2025-11-01 13:16:29.923215 | Job console starting
2025-11-01 13:16:29.934960 | Updating git repos
2025-11-01 13:16:30.003231 | Cloning repos into workspace
2025-11-01 13:16:30.201388 | Restoring repo states
2025-11-01 13:16:30.220216 | Merging changes
2025-11-01 13:16:30.220236 | Checking out repos
2025-11-01 13:16:30.483921 | Preparing playbooks
2025-11-01 13:16:31.028782 | Running Ansible setup
2025-11-01 13:16:35.162370 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-11-01 13:16:35.901506 |
2025-11-01 13:16:35.901676 | PLAY [Base pre]
2025-11-01 13:16:35.918585 |
2025-11-01 13:16:35.918720 | TASK [Setup log path fact]
2025-11-01 13:16:35.948796 | orchestrator | ok
2025-11-01 13:16:35.965893 |
2025-11-01 13:16:35.966036 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-11-01 13:16:36.008416 | orchestrator | ok
2025-11-01 13:16:36.021282 |
2025-11-01 13:16:36.021500 | TASK [emit-job-header : Print job information]
2025-11-01 13:16:36.061668 | # Job Information
2025-11-01 13:16:36.061918 | Ansible Version: 2.16.14
2025-11-01 13:16:36.061956 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-11-01 13:16:36.061990 | Pipeline: post
2025-11-01 13:16:36.062013 | Executor: 521e9411259a
2025-11-01 13:16:36.062034 | Triggered by: https://github.com/osism/testbed/commit/7657a146d4dc441c7be9cd14837b16c71317318b
2025-11-01 13:16:36.062059 | Event ID: f63c41f4-b724-11f0-956d-76f43e68ee03
2025-11-01 13:16:36.068911 |
2025-11-01 13:16:36.069033 | LOOP [emit-job-header : Print node information]
2025-11-01 13:16:36.187001 | orchestrator | ok:
2025-11-01 13:16:36.187281 | orchestrator | # Node Information
2025-11-01 13:16:36.187333 | orchestrator | Inventory Hostname: orchestrator
2025-11-01 13:16:36.187359 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-11-01 13:16:36.187381 | orchestrator | Username: zuul-testbed02
2025-11-01 13:16:36.187402 | orchestrator | Distro: Debian 12.12
2025-11-01 13:16:36.187426 | orchestrator | Provider: static-testbed
2025-11-01 13:16:36.187446 | orchestrator | Region:
2025-11-01 13:16:36.187467 | orchestrator | Label: testbed-orchestrator
2025-11-01 13:16:36.187487 | orchestrator | Product Name: OpenStack Nova
2025-11-01 13:16:36.187506 | orchestrator | Interface IP: 81.163.193.140
2025-11-01 13:16:36.216684 |
2025-11-01 13:16:36.216844 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-11-01 13:16:36.717197 | orchestrator -> localhost | changed
2025-11-01 13:16:36.725699 |
2025-11-01 13:16:36.725830 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-11-01 13:16:37.742695 | orchestrator -> localhost | changed
2025-11-01 13:16:37.757067 |
2025-11-01 13:16:37.757189 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-11-01 13:16:38.034331 | orchestrator -> localhost | ok
2025-11-01 13:16:38.045894 |
2025-11-01 13:16:38.046051 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-11-01 13:16:38.080266 | orchestrator | ok
2025-11-01 13:16:38.098627 | orchestrator | included: /var/lib/zuul/builds/25805c129ff442398dcdfabb9a23ba03/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-11-01 13:16:38.106618 |
2025-11-01 13:16:38.106733 | TASK [add-build-sshkey : Create Temp SSH key]
2025-11-01 13:16:39.636717 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-11-01 13:16:39.636955 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/25805c129ff442398dcdfabb9a23ba03/work/25805c129ff442398dcdfabb9a23ba03_id_rsa
2025-11-01 13:16:39.636992 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/25805c129ff442398dcdfabb9a23ba03/work/25805c129ff442398dcdfabb9a23ba03_id_rsa.pub
2025-11-01 13:16:39.637017 | orchestrator -> localhost | The key fingerprint is:
2025-11-01 13:16:39.637041 | orchestrator -> localhost | SHA256:Ut5z8cx4Sx/rEnIuNgJCAhJgIadPMLS1nRG10YMFzNQ zuul-build-sshkey
2025-11-01 13:16:39.637063 | orchestrator -> localhost | The key's randomart image is:
2025-11-01 13:16:39.637097 | orchestrator -> localhost | +---[RSA 3072]----+
2025-11-01 13:16:39.637119 | orchestrator -> localhost | |O=o. o*=*. |
2025-11-01 13:16:39.637140 | orchestrator -> localhost | |+*o o o+oE |
2025-11-01 13:16:39.637161 | orchestrator -> localhost | |o.o. o .. . . |
2025-11-01 13:16:39.637181 | orchestrator -> localhost | | o . . o . * |
2025-11-01 13:16:39.637201 | orchestrator -> localhost | | . o . S o o *. |
2025-11-01 13:16:39.637225 | orchestrator -> localhost | | . o + = oo|
2025-11-01 13:16:39.637245 | orchestrator -> localhost | | . . + o..|
2025-11-01 13:16:39.637265 | orchestrator -> localhost | | . + o. |
2025-11-01 13:16:39.637285 | orchestrator -> localhost | | o o .. |
2025-11-01 13:16:39.637323 | orchestrator -> localhost | +----[SHA256]-----+
2025-11-01 13:16:39.637378 | orchestrator -> localhost | ok: Runtime: 0:00:01.047907
2025-11-01 13:16:39.645010 |
2025-11-01 13:16:39.645126 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-11-01 13:16:39.665046 | orchestrator | ok
2025-11-01 13:16:39.674981 | orchestrator | included: /var/lib/zuul/builds/25805c129ff442398dcdfabb9a23ba03/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-11-01 13:16:39.684128 |
2025-11-01 13:16:39.684229 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-11-01 13:16:39.707498 | orchestrator | skipping: Conditional result was False
2025-11-01 13:16:39.714966 |
2025-11-01 13:16:39.715067 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-11-01 13:16:40.856796 | orchestrator | changed
2025-11-01 13:16:40.866351 |
2025-11-01 13:16:40.866517 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-11-01 13:16:41.140397 | orchestrator | ok
2025-11-01 13:16:41.149423 |
2025-11-01 13:16:41.149548 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-11-01 13:16:41.605401 | orchestrator | ok
2025-11-01 13:16:41.613348 |
2025-11-01 13:16:41.613480 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-11-01 13:16:42.025595 | orchestrator | ok
2025-11-01 13:16:42.035353 |
2025-11-01 13:16:42.035495 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-11-01 13:16:42.060422 | orchestrator | skipping: Conditional result was False
2025-11-01 13:16:42.074018 |
2025-11-01 13:16:42.074171 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-11-01 13:16:42.543844 | orchestrator -> localhost | changed
2025-11-01 13:16:42.558220 |
2025-11-01 13:16:42.558377 | TASK [add-build-sshkey : Add back temp key]
2025-11-01 13:16:42.907358 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/25805c129ff442398dcdfabb9a23ba03/work/25805c129ff442398dcdfabb9a23ba03_id_rsa (zuul-build-sshkey)
2025-11-01 13:16:42.907875 | orchestrator -> localhost | ok: Runtime: 0:00:00.015477
2025-11-01 13:16:42.922345 |
2025-11-01 13:16:42.922493 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-11-01 13:16:43.344556 | orchestrator | ok
2025-11-01 13:16:43.353834 |
2025-11-01 13:16:43.353964 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-11-01 13:16:43.388461 | orchestrator | skipping: Conditional result was False
2025-11-01 13:16:43.447249 |
2025-11-01 13:16:43.447428 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-11-01 13:16:43.850623 | orchestrator | ok
2025-11-01 13:16:43.867699 |
2025-11-01 13:16:43.867830 | TASK [validate-host : Define zuul_info_dir fact]
2025-11-01 13:16:43.911833 | orchestrator | ok
2025-11-01 13:16:43.924088 |
2025-11-01 13:16:43.924253 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-11-01 13:16:44.212207 | orchestrator -> localhost | ok
2025-11-01 13:16:44.220179 |
2025-11-01 13:16:44.220332 | TASK [validate-host : Collect information about the host]
2025-11-01 13:16:45.376753 | orchestrator | ok
2025-11-01 13:16:45.390458 |
2025-11-01 13:16:45.390579 | TASK [validate-host : Sanitize hostname]
2025-11-01 13:16:45.465881 | orchestrator | ok
2025-11-01 13:16:45.474561 |
2025-11-01 13:16:45.474700 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-11-01 13:16:46.038289 | orchestrator -> localhost | changed
2025-11-01 13:16:46.050124 |
2025-11-01 13:16:46.050283 | TASK [validate-host : Collect information about zuul worker]
2025-11-01 13:16:46.473039 | orchestrator | ok
2025-11-01 13:16:46.482410 |
2025-11-01 13:16:46.483136 | TASK [validate-host : Write out all zuul information for each host]
2025-11-01 13:16:47.035965 | orchestrator -> localhost | changed
2025-11-01 13:16:47.056215 |
2025-11-01 13:16:47.056436 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-11-01 13:16:47.347398 | orchestrator | ok
2025-11-01 13:16:47.356703 |
2025-11-01 13:16:47.356830 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-11-01 13:17:29.960674 | orchestrator | changed:
2025-11-01 13:17:29.961904 | orchestrator | .d..t...... src/
2025-11-01 13:17:29.961964 | orchestrator | .d..t...... src/github.com/
2025-11-01 13:17:29.961987 | orchestrator | .d..t...... src/github.com/osism/
2025-11-01 13:17:29.962006 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-11-01 13:17:29.962024 | orchestrator | RedHat.yml
2025-11-01 13:17:29.981690 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-11-01 13:17:29.981704 | orchestrator | RedHat.yml
2025-11-01 13:17:29.981748 | orchestrator | = 1.53.0"...
2025-11-01 13:17:47.915732 | orchestrator | 13:17:47.915 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-11-01 13:17:48.405435 | orchestrator | 13:17:48.405 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-11-01 13:17:49.052414 | orchestrator | 13:17:49.052 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-11-01 13:17:49.117461 | orchestrator | 13:17:49.117 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-11-01 13:17:49.838098 | orchestrator | 13:17:49.837 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-11-01 13:17:49.909018 | orchestrator | 13:17:49.908 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-11-01 13:17:50.611524 | orchestrator | 13:17:50.611 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-11-01 13:17:50.611618 | orchestrator | 13:17:50.611 STDOUT terraform: Providers are signed by their developers.
2025-11-01 13:17:50.611632 | orchestrator | 13:17:50.611 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-11-01 13:17:50.611645 | orchestrator | 13:17:50.611 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-11-01 13:17:50.611883 | orchestrator | 13:17:50.611 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-11-01 13:17:50.611948 | orchestrator | 13:17:50.611 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-11-01 13:17:50.612113 | orchestrator | 13:17:50.611 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-11-01 13:17:50.612127 | orchestrator | 13:17:50.612 STDOUT terraform: you run "tofu init" in the future.
2025-11-01 13:17:50.612206 | orchestrator | 13:17:50.612 STDOUT terraform: OpenTofu has been successfully initialized!
2025-11-01 13:17:50.612277 | orchestrator | 13:17:50.612 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-11-01 13:17:50.612382 | orchestrator | 13:17:50.612 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-11-01 13:17:50.612397 | orchestrator | 13:17:50.612 STDOUT terraform: should now work.
2025-11-01 13:17:50.612534 | orchestrator | 13:17:50.612 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-11-01 13:17:50.612642 | orchestrator | 13:17:50.612 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-11-01 13:17:50.612767 | orchestrator | 13:17:50.612 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-11-01 13:17:50.919788 | orchestrator | 13:17:50.919 STDOUT terraform: Created and switched to workspace "ci"!
2025-11-01 13:17:50.919868 | orchestrator | 13:17:50.919 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-11-01 13:17:50.919952 | orchestrator | 13:17:50.919 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-11-01 13:17:50.919967 | orchestrator | 13:17:50.919 STDOUT terraform: for this configuration.
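The providers installed above come from the testbed's OpenTofu configuration, which is not itself part of this log. A minimal required_providers sketch that is consistent with the constraints visible here might look as follows; the hashicorp/local constraint ">= 2.2.0" is taken from the log, the openstack constraint is inferred from the truncated '= 1.53.0"...' fragment, and everything else is an assumption rather than the actual osism/testbed sources.

# Sketch only: not the testbed's real versions file.
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"   # constraint shown in the init output above
    }
    null = {
      source = "hashicorp/null"   # no explicit constraint visible in the log
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"  # assumed from the truncated constraint fragment
    }
  }
}

Running "tofu init" against a block like this would resolve and install the three providers and write the .terraform.lock.hcl lock file mentioned in the output above.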
2025-11-01 13:17:51.183740 | orchestrator | 13:17:51.183 STDOUT terraform: ci.auto.tfvars
2025-11-01 13:17:51.195700 | orchestrator | 13:17:51.195 STDOUT terraform: default_custom.tf
2025-11-01 13:17:52.217653 | orchestrator | 13:17:52.217 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-11-01 13:17:52.741938 | orchestrator | 13:17:52.741 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-11-01 13:17:53.006097 | orchestrator | 13:17:53.000 STDOUT terraform:

  OpenTofu used the selected providers to generate the following execution
  plan. Resource actions are indicated with the following symbols:
    + create
   <= read (data resources)

  OpenTofu will perform the following actions:

  # data.openstack_images_image_v2.image will be read during apply
  # (config refers to values not yet known)
 <= data "openstack_images_image_v2" "image" {
      + checksum    = (known after apply)
      + created_at  = (known after apply)
      + file        = (known after apply)
      + id          = (known after apply)
      + metadata    = (known after apply)
      + min_disk_gb = (known after apply)
      + min_ram_mb  = (known after apply)
      + most_recent = true
      + name        = (known after apply)
      + protected   = (known after apply)
      + region      = (known after apply)
      + schema      = (known after apply)
      + size_bytes  = (known after apply)
      + tags        = (known after apply)
      + updated_at  = (known after apply)
    }

  # data.openstack_images_image_v2.image_node will be read during apply
  # (config refers to values not yet known)
 <= data "openstack_images_image_v2" "image_node" {
      + checksum    = (known after apply)
      + created_at  = (known after apply)
      + file        = (known after apply)
      + id          = (known after apply)
      + metadata    = (known after apply)
      + min_disk_gb = (known after apply)
      + min_ram_mb  = (known after apply)
      + most_recent = true
      + name        = (known after apply)
      + protected   = (known after apply)
      + region      = (known after apply)
      + schema      = (known after apply)
      + size_bytes  = (known after apply)
      + tags        = (known after apply)
      + updated_at  = (known after apply)
    }

  # local_file.MANAGER_ADDRESS will be created
  + resource "local_file" "MANAGER_ADDRESS" {
      + content              = (known after apply)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0644"
      + filename             = ".MANAGER_ADDRESS.ci"
      + id                   = (known after apply)
    }

  # local_file.id_rsa_pub will be created
  + resource "local_file" "id_rsa_pub" {
      + content              = (known after apply)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0644"
      + filename             = ".id_rsa.ci.pub"
      + id                   = (known after apply)
    }

  # local_file.inventory will be created
  + resource "local_file" "inventory" {
      + content              = (known after apply)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0644"
      + filename             = "inventory.ci"
      + id                   = (known after apply)
    }

  # local_sensitive_file.id_rsa will be created
  + resource "local_sensitive_file" "id_rsa" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0600"
      + filename             = ".id_rsa.ci"
      + id                   = (known after apply)
    }

  # null_resource.node_semaphore will be created
  + resource "null_resource" "node_semaphore" {
      + id = (known after apply)
    }

  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-manager-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-0-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-1-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[0] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-0-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[1] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-1-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[6] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-6-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[7] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-7-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[8] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-8-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (sensitive value)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           =
"volume" 2025-11-01 13:17:53.045360 | orchestrator | 13:17:53.045 STDOUT terraform:  + uuid = (known after apply) 2025-11-01 13:17:53.045366 | orchestrator | 13:17:53.045 STDOUT terraform:  } 2025-11-01 13:17:53.045372 | orchestrator | 13:17:53.045 STDOUT terraform:  + network { 2025-11-01 13:17:53.045466 | orchestrator | 13:17:53.045 STDOUT terraform:  + access_network = false 2025-11-01 13:17:53.045484 | orchestrator | 13:17:53.045 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-01 13:17:53.045493 | orchestrator | 13:17:53.045 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-01 13:17:53.045550 | orchestrator | 13:17:53.045 STDOUT terraform:  + mac = (known after apply) 2025-11-01 13:17:53.045623 | orchestrator | 13:17:53.045 STDOUT terraform:  + name = (known after apply) 2025-11-01 13:17:53.045655 | orchestrator | 13:17:53.045 STDOUT terraform:  + port = (known after apply) 2025-11-01 13:17:53.045717 | orchestrator | 13:17:53.045 STDOUT terraform:  + uuid = (known after apply) 2025-11-01 13:17:53.045725 | orchestrator | 13:17:53.045 STDOUT terraform:  } 2025-11-01 13:17:53.045762 | orchestrator | 13:17:53.045 STDOUT terraform:  } 2025-11-01 13:17:53.045935 | orchestrator | 13:17:53.045 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-11-01 13:17:53.045942 | orchestrator | 13:17:53.045 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-01 13:17:53.045946 | orchestrator | 13:17:53.045 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-01 13:17:53.045981 | orchestrator | 13:17:53.045 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-01 13:17:53.046101 | orchestrator | 13:17:53.045 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-01 13:17:53.046550 | orchestrator | 13:17:53.046 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 13:17:53.046612 | orchestrator | 13:17:53.046 STDOUT terraform:  + availability_zone = "nova" 2025-11-01 13:17:53.046618 | orchestrator | 13:17:53.046 STDOUT terraform:  + config_drive = true 2025-11-01 13:17:53.046637 | orchestrator | 13:17:53.046 STDOUT terraform:  + created = (known after apply) 2025-11-01 13:17:53.046688 | orchestrator | 13:17:53.046 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-01 13:17:53.046695 | orchestrator | 13:17:53.046 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-01 13:17:53.046729 | orchestrator | 13:17:53.046 STDOUT terraform:  + force_delete = false 2025-11-01 13:17:53.046810 | orchestrator | 13:17:53.046 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-01 13:17:53.046816 | orchestrator | 13:17:53.046 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.046825 | orchestrator | 13:17:53.046 STDOUT terraform:  + image_id = (known after apply) 2025-11-01 13:17:53.046859 | orchestrator | 13:17:53.046 STDOUT terraform:  + image_name = (known after apply) 2025-11-01 13:17:53.046887 | orchestrator | 13:17:53.046 STDOUT terraform:  + key_pair = "testbed" 2025-11-01 13:17:53.046919 | orchestrator | 13:17:53.046 STDOUT terraform:  + name = "testbed-node-3" 2025-11-01 13:17:53.046940 | orchestrator | 13:17:53.046 STDOUT terraform:  + power_state = "active" 2025-11-01 13:17:53.046967 | orchestrator | 13:17:53.046 STDOUT terraform:  + region = (known after apply) 2025-11-01 13:17:53.047033 | orchestrator | 13:17:53.046 STDOUT terraform:  + security_groups = (known after apply) 2025-11-01 13:17:53.047038 | orchestrator | 13:17:53.046 
STDOUT terraform:  + stop_before_destroy = false 2025-11-01 13:17:53.047059 | orchestrator | 13:17:53.047 STDOUT terraform:  + updated = (known after apply) 2025-11-01 13:17:53.047146 | orchestrator | 13:17:53.047 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-01 13:17:53.047155 | orchestrator | 13:17:53.047 STDOUT terraform:  + block_device { 2025-11-01 13:17:53.047159 | orchestrator | 13:17:53.047 STDOUT terraform:  + boot_index = 0 2025-11-01 13:17:53.047165 | orchestrator | 13:17:53.047 STDOUT terraform:  + delete_on_termination = false 2025-11-01 13:17:53.047188 | orchestrator | 13:17:53.047 STDOUT terraform:  + destination_type = "volume" 2025-11-01 13:17:53.047217 | orchestrator | 13:17:53.047 STDOUT terraform:  + multiattach = false 2025-11-01 13:17:53.047242 | orchestrator | 13:17:53.047 STDOUT terraform:  + source_type = "volume" 2025-11-01 13:17:53.047286 | orchestrator | 13:17:53.047 STDOUT terraform:  + uuid = (known after apply) 2025-11-01 13:17:53.047293 | orchestrator | 13:17:53.047 STDOUT terraform:  } 2025-11-01 13:17:53.047298 | orchestrator | 13:17:53.047 STDOUT terraform:  + network { 2025-11-01 13:17:53.047334 | orchestrator | 13:17:53.047 STDOUT terraform:  + access_network = false 2025-11-01 13:17:53.047367 | orchestrator | 13:17:53.047 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-01 13:17:53.047412 | orchestrator | 13:17:53.047 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-01 13:17:53.047421 | orchestrator | 13:17:53.047 STDOUT terraform:  + mac = (known after apply) 2025-11-01 13:17:53.047456 | orchestrator | 13:17:53.047 STDOUT terraform:  + name = (known after apply) 2025-11-01 13:17:53.047491 | orchestrator | 13:17:53.047 STDOUT terraform:  + port = (known after apply) 2025-11-01 13:17:53.047516 | orchestrator | 13:17:53.047 STDOUT terraform:  + uuid = (known after apply) 2025-11-01 13:17:53.047522 | orchestrator | 13:17:53.047 STDOUT terraform:  } 2025-11-01 13:17:53.047531 | orchestrator | 13:17:53.047 STDOUT terraform:  } 2025-11-01 13:17:53.047577 | orchestrator | 13:17:53.047 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-11-01 13:17:53.047615 | orchestrator | 13:17:53.047 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-01 13:17:53.047688 | orchestrator | 13:17:53.047 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-01 13:17:53.047696 | orchestrator | 13:17:53.047 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-01 13:17:53.047725 | orchestrator | 13:17:53.047 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-01 13:17:53.047753 | orchestrator | 13:17:53.047 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 13:17:53.047778 | orchestrator | 13:17:53.047 STDOUT terraform:  + availability_zone = "nova" 2025-11-01 13:17:53.047784 | orchestrator | 13:17:53.047 STDOUT terraform:  + config_drive = true 2025-11-01 13:17:53.047837 | orchestrator | 13:17:53.047 STDOUT terraform:  + created = (known after apply) 2025-11-01 13:17:53.047847 | orchestrator | 13:17:53.047 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-01 13:17:53.047880 | orchestrator | 13:17:53.047 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-01 13:17:53.047912 | orchestrator | 13:17:53.047 STDOUT terraform:  + force_delete = false 2025-11-01 13:17:53.047940 | orchestrator | 13:17:53.047 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-01 13:17:53.047969 
| orchestrator | 13:17:53.047 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.048009 | orchestrator | 13:17:53.047 STDOUT terraform:  + image_id = (known after apply) 2025-11-01 13:17:53.048038 | orchestrator | 13:17:53.047 STDOUT terraform:  + image_name = (known after apply) 2025-11-01 13:17:53.048045 | orchestrator | 13:17:53.048 STDOUT terraform:  + key_pair = "testbed" 2025-11-01 13:17:53.048083 | orchestrator | 13:17:53.048 STDOUT terraform:  + name = "testbed-node-4" 2025-11-01 13:17:53.048117 | orchestrator | 13:17:53.048 STDOUT terraform:  + power_state = "active" 2025-11-01 13:17:53.048139 | orchestrator | 13:17:53.048 STDOUT terraform:  + region = (known after apply) 2025-11-01 13:17:53.048192 | orchestrator | 13:17:53.048 STDOUT terraform:  + security_groups = (known after apply) 2025-11-01 13:17:53.048197 | orchestrator | 13:17:53.048 STDOUT terraform:  + stop_before_destroy = false 2025-11-01 13:17:53.048229 | orchestrator | 13:17:53.048 STDOUT terraform:  + updated = (known after apply) 2025-11-01 13:17:53.048276 | orchestrator | 13:17:53.048 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-01 13:17:53.048283 | orchestrator | 13:17:53.048 STDOUT terraform:  + block_device { 2025-11-01 13:17:53.048321 | orchestrator | 13:17:53.048 STDOUT terraform:  + boot_index = 0 2025-11-01 13:17:53.048414 | orchestrator | 13:17:53.048 STDOUT terraform:  + delete_on_termination = false 2025-11-01 13:17:53.048435 | orchestrator | 13:17:53.048 STDOUT terraform:  + destination_type = "volume" 2025-11-01 13:17:53.048445 | orchestrator | 13:17:53.048 STDOUT terraform:  + multiattach = false 2025-11-01 13:17:53.048452 | orchestrator | 13:17:53.048 STDOUT terraform:  + source_type = "volume" 2025-11-01 13:17:53.048456 | orchestrator | 13:17:53.048 STDOUT terraform:  + uuid = (known after apply) 2025-11-01 13:17:53.048462 | orchestrator | 13:17:53.048 STDOUT terraform:  } 2025-11-01 13:17:53.048468 | orchestrator | 13:17:53.048 STDOUT terraform:  + network { 2025-11-01 13:17:53.048491 | orchestrator | 13:17:53.048 STDOUT terraform:  + access_network = false 2025-11-01 13:17:53.048554 | orchestrator | 13:17:53.048 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-01 13:17:53.048559 | orchestrator | 13:17:53.048 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-01 13:17:53.048615 | orchestrator | 13:17:53.048 STDOUT terraform:  + mac = (known after apply) 2025-11-01 13:17:53.048620 | orchestrator | 13:17:53.048 STDOUT terraform:  + name = (known after apply) 2025-11-01 13:17:53.048626 | orchestrator | 13:17:53.048 STDOUT terraform:  + port = (known after apply) 2025-11-01 13:17:53.048663 | orchestrator | 13:17:53.048 STDOUT terraform:  + uuid = (known after apply) 2025-11-01 13:17:53.048670 | orchestrator | 13:17:53.048 STDOUT terraform:  } 2025-11-01 13:17:53.048675 | orchestrator | 13:17:53.048 STDOUT terraform:  } 2025-11-01 13:17:53.048722 | orchestrator | 13:17:53.048 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-11-01 13:17:53.048785 | orchestrator | 13:17:53.048 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-01 13:17:53.048795 | orchestrator | 13:17:53.048 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-01 13:17:53.048830 | orchestrator | 13:17:53.048 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-01 13:17:53.048865 | orchestrator | 13:17:53.048 STDOUT terraform:  + all_metadata = (known after apply) 
2025-11-01 13:17:53.048880 | orchestrator | 13:17:53.048 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 13:17:53.048922 | orchestrator | 13:17:53.048 STDOUT terraform:  + availability_zone = "nova" 2025-11-01 13:17:53.048928 | orchestrator | 13:17:53.048 STDOUT terraform:  + config_drive = true 2025-11-01 13:17:53.048964 | orchestrator | 13:17:53.048 STDOUT terraform:  + created = (known after apply) 2025-11-01 13:17:53.048997 | orchestrator | 13:17:53.048 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-01 13:17:53.049020 | orchestrator | 13:17:53.048 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-01 13:17:53.049047 | orchestrator | 13:17:53.049 STDOUT terraform:  + force_delete = false 2025-11-01 13:17:53.049070 | orchestrator | 13:17:53.049 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-01 13:17:53.049106 | orchestrator | 13:17:53.049 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.049142 | orchestrator | 13:17:53.049 STDOUT terraform:  + image_id = (known after apply) 2025-11-01 13:17:53.049172 | orchestrator | 13:17:53.049 STDOUT terraform:  + image_name = (known after apply) 2025-11-01 13:17:53.049187 | orchestrator | 13:17:53.049 STDOUT terraform:  + key_pair = "testbed" 2025-11-01 13:17:53.049218 | orchestrator | 13:17:53.049 STDOUT terraform:  + name = "testbed-node-5" 2025-11-01 13:17:53.049249 | orchestrator | 13:17:53.049 STDOUT terraform:  + power_state = "active" 2025-11-01 13:17:53.049288 | orchestrator | 13:17:53.049 STDOUT terraform:  + region = (known after apply) 2025-11-01 13:17:53.049332 | orchestrator | 13:17:53.049 STDOUT terraform:  + security_groups = (known after apply) 2025-11-01 13:17:53.049354 | orchestrator | 13:17:53.049 STDOUT terraform:  + stop_before_destroy = false 2025-11-01 13:17:53.049391 | orchestrator | 13:17:53.049 STDOUT terraform:  + updated = (known after apply) 2025-11-01 13:17:53.049433 | orchestrator | 13:17:53.049 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-01 13:17:53.049460 | orchestrator | 13:17:53.049 STDOUT terraform:  + block_device { 2025-11-01 13:17:53.049467 | orchestrator | 13:17:53.049 STDOUT terraform:  + boot_index = 0 2025-11-01 13:17:53.049500 | orchestrator | 13:17:53.049 STDOUT terraform:  + delete_on_termination = false 2025-11-01 13:17:53.049544 | orchestrator | 13:17:53.049 STDOUT terraform:  + destination_type = "volume" 2025-11-01 13:17:53.049550 | orchestrator | 13:17:53.049 STDOUT terraform:  + multiattach = false 2025-11-01 13:17:53.049595 | orchestrator | 13:17:53.049 STDOUT terraform:  + source_type = "volume" 2025-11-01 13:17:53.049613 | orchestrator | 13:17:53.049 STDOUT terraform:  + uuid = (known after apply) 2025-11-01 13:17:53.049651 | orchestrator | 13:17:53.049 STDOUT terraform:  } 2025-11-01 13:17:53.049660 | orchestrator | 13:17:53.049 STDOUT terraform:  + network { 2025-11-01 13:17:53.049665 | orchestrator | 13:17:53.049 STDOUT terraform:  + access_network = false 2025-11-01 13:17:53.049689 | orchestrator | 13:17:53.049 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-01 13:17:53.049716 | orchestrator | 13:17:53.049 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-01 13:17:53.049748 | orchestrator | 13:17:53.049 STDOUT terraform:  + mac = (known after apply) 2025-11-01 13:17:53.049779 | orchestrator | 13:17:53.049 STDOUT terraform:  + name = (known after apply) 2025-11-01 13:17:53.049814 | orchestrator | 13:17:53.049 STDOUT terraform:  + port = (known after 
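The plan entries for testbed-node-0 through testbed-node-5 above are identical apart from the instance name and the attached port. A minimal HCL sketch that would produce entries of this shape follows; the flavor, key pair, availability zone and boot-from-volume block device are the literal values from the plan, while the variable name, the user-data file and the referenced volume and port resources are assumptions, since the testbed's Terraform source is not part of this log.

# Sketch only, not the testbed's actual source: var.number_of_nodes,
# user_data.yml and the node_volume / node_port_management references are
# assumed names.
variable "number_of_nodes" {
  type    = number
  default = 6
}

resource "openstack_compute_instance_v2" "node_server" {
  count             = var.number_of_nodes
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = openstack_compute_keypair_v2.key.name
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.yml")   # the plan above shows this value hashed

  # Boot from a pre-created volume, matching the block_device section above.
  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  # Attach the node's management port instead of a plain network UUID.
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}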
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id = (known after apply)
      + name = "testbed"
      + private_key = (sensitive value)
      + public_key = (known after apply)
      + region = (known after apply)
      + user_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
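A sketch of the key pair and the volume attachments planned above, assuming the volumes themselves (not visible in this excerpt) are declared as openstack_blockstorage_volume_v3 resources; the attachment-to-node mapping used here is a placeholder, since the plan shows nine attachments but not how they are distributed across the six nodes.

# Sketch only: openstack_blockstorage_volume_v3.node_volume is an assumed
# resource name and the modulo mapping below is a placeholder.
resource "openstack_compute_keypair_v2" "key" {
  name = "testbed"
  # No public_key is supplied, so the provider generates the key pair and
  # exports private_key, which is why the plan marks it as a sensitive value.
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}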
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip = (known after apply)
      + floating_ip = (known after apply)
      + id = (known after apply)
      + port_id = (known after apply)
      + region = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address = (known after apply)
      + all_tags = (known after apply)
      + dns_domain = (known after apply)
      + dns_name = (known after apply)
      + fixed_ip = (known after apply)
      + id = (known after apply)
      + pool = "public"
      + port_id = (known after apply)
      + region = (known after apply)
      + subnet_id = (known after apply)
      + tenant_id = (known after apply)
    }
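The two floating-IP resources above could come from a definition like the following sketch; "public" is the pool named in the plan, and the port reference points at the manager_port_management port planned further below.

# Sketch only: the association simply binds the allocated floating IP to the
# manager's management port.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}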
  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up = (known after apply)
      + all_tags = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain = (known after apply)
      + external = (known after apply)
      + id = (known after apply)
      + mtu = (known after apply)
      + name = "net-testbed-management"
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + shared = (known after apply)
      + tenant_id = (known after apply)
      + transparent_vlan = (known after apply)
      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id = (known after apply)
        }
    }
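A sketch of the management network and the manager port planned above; the subnet resource and its 192.168.16.0/20 CIDR are assumptions inferred from the fixed IPs and the security-group rules later in the plan, as the subnet definition itself is not part of this excerpt.

# Sketch only: subnet_management is an assumed resource.
resource "openstack_networking_network_v2" "net_management" {
  name                    = "net-testbed-management"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  network_id = openstack_networking_network_v2.net_management.id
  cidr       = "192.168.16.0/20"   # inferred from the security-group rules below
  ip_version = 4
}

resource "openstack_networking_port_v2" "manager_port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.5"
  }

  # Allow the port to answer for the 192.168.16.8 address despite Neutron
  # port security.
  allowed_address_pairs {
    ip_address = "192.168.16.8/32"
  }
}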
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id = (known after apply)
        }
    }
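The six node ports above differ only in their fixed IP (192.168.16.10 through .15), so a count-based sketch covers them all. The allowed address pairs are the literal values from the plan; the count wiring and the network and subnet references are the assumed resources from the earlier sketches.

# Sketch only: count wiring and subnet reference are assumptions.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}"   # .10 through .15
  }

  # Shared addresses every node port may answer for (presumably gateway and
  # virtual IPs used inside the testbed).
  allowed_address_pairs {
    ip_address = "192.168.16.254/32"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/32"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/32"
  }
}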
  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id = (known after apply)
      + port_id = (known after apply)
      + region = (known after apply)
      + router_id = (known after apply)
      + subnet_id = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up = (known after apply)
      + all_tags = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed = (known after apply)
      + enable_snat = (known after apply)
      + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id = (known after apply)
      + id = (known after apply)
      + name = "testbed"
      + region = (known after apply)
      + tenant_id = (known after apply)
      + external_fixed_ip (known after apply)
    }
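A sketch of the router and its interface as planned above; the external network UUID is the literal value from the plan, while the subnet reference is again the assumed management subnet.

# Sketch only: subnet_management is the assumed subnet from the earlier sketch.
resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}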
2025-11-01 13:17:53.070135 | orchestrator | 13:17:53.064 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 13:17:53.070143 | orchestrator | 13:17:53.064 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-01 13:17:53.070151 | orchestrator | 13:17:53.064 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 13:17:53.070159 | orchestrator | 13:17:53.064 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 13:17:53.070170 | orchestrator | 13:17:53.064 STDOUT terraform:  } 2025-11-01 13:17:53.070178 | orchestrator | 13:17:53.064 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-11-01 13:17:53.070187 | orchestrator | 13:17:53.064 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-11-01 13:17:53.070195 | orchestrator | 13:17:53.064 STDOUT terraform:  + description = "wireguard" 2025-11-01 13:17:53.070203 | orchestrator | 13:17:53.064 STDOUT terraform:  + direction = "ingress" 2025-11-01 13:17:53.070211 | orchestrator | 13:17:53.064 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 13:17:53.070218 | orchestrator | 13:17:53.064 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.070227 | orchestrator | 13:17:53.064 STDOUT terraform:  + port_range_max = 51820 2025-11-01 13:17:53.070235 | orchestrator | 13:17:53.064 STDOUT terraform:  + port_range_min = 51820 2025-11-01 13:17:53.070243 | orchestrator | 13:17:53.064 STDOUT terraform:  + protocol = "udp" 2025-11-01 13:17:53.070250 | orchestrator | 13:17:53.064 STDOUT terraform:  + region = (known after apply) 2025-11-01 13:17:53.070258 | orchestrator | 13:17:53.064 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 13:17:53.070266 | orchestrator | 13:17:53.064 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 13:17:53.070274 | orchestrator | 13:17:53.064 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-01 13:17:53.070282 | orchestrator | 13:17:53.064 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 13:17:53.070290 | orchestrator | 13:17:53.065 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 13:17:53.070298 | orchestrator | 13:17:53.065 STDOUT terraform:  } 2025-11-01 13:17:53.070322 | orchestrator | 13:17:53.065 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-11-01 13:17:53.070330 | orchestrator | 13:17:53.065 STDOUT terraform:  + resource "openstack_networking 2025-11-01 13:17:53.070338 | orchestrator | 13:17:53.065 STDOUT terraform: _secgroup_rule_v2" "security_group_management_rule3" { 2025-11-01 13:17:53.070353 | orchestrator | 13:17:53.065 STDOUT terraform:  + direction = "ingress" 2025-11-01 13:17:53.070361 | orchestrator | 13:17:53.065 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 13:17:53.070390 | orchestrator | 13:17:53.065 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.070399 | orchestrator | 13:17:53.065 STDOUT terraform:  + protocol = "tcp" 2025-11-01 13:17:53.070408 | orchestrator | 13:17:53.065 STDOUT terraform:  + region = (known after apply) 2025-11-01 13:17:53.070416 | orchestrator | 13:17:53.065 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 13:17:53.070423 | orchestrator | 13:17:53.065 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 13:17:53.070431 | orchestrator | 13:17:53.065 STDOUT terraform:  + 
remote_ip_prefix = "192.168.16.0/20" 2025-11-01 13:17:53.070439 | orchestrator | 13:17:53.065 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 13:17:53.070447 | orchestrator | 13:17:53.065 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 13:17:53.070455 | orchestrator | 13:17:53.065 STDOUT terraform:  } 2025-11-01 13:17:53.070463 | orchestrator | 13:17:53.065 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-11-01 13:17:53.070471 | orchestrator | 13:17:53.065 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-11-01 13:17:53.070485 | orchestrator | 13:17:53.065 STDOUT terraform:  + direction = "ingress" 2025-11-01 13:17:53.070493 | orchestrator | 13:17:53.065 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 13:17:53.070501 | orchestrator | 13:17:53.065 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.070509 | orchestrator | 13:17:53.065 STDOUT terraform:  + protocol = "udp" 2025-11-01 13:17:53.070517 | orchestrator | 13:17:53.065 STDOUT terraform:  + region = (known after apply) 2025-11-01 13:17:53.070525 | orchestrator | 13:17:53.065 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 13:17:53.070533 | orchestrator | 13:17:53.065 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 13:17:53.070541 | orchestrator | 13:17:53.065 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-11-01 13:17:53.070549 | orchestrator | 13:17:53.065 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 13:17:53.070560 | orchestrator | 13:17:53.065 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 13:17:53.070568 | orchestrator | 13:17:53.065 STDOUT terraform:  } 2025-11-01 13:17:53.070576 | orchestrator | 13:17:53.065 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-11-01 13:17:53.070584 | orchestrator | 13:17:53.065 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-11-01 13:17:53.070592 | orchestrator | 13:17:53.065 STDOUT terraform:  + direction = "ingress" 2025-11-01 13:17:53.070600 | orchestrator | 13:17:53.066 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 13:17:53.070608 | orchestrator | 13:17:53.066 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.070622 | orchestrator | 13:17:53.066 STDOUT terraform:  + protocol = "icmp" 2025-11-01 13:17:53.070630 | orchestrator | 13:17:53.066 STDOUT terraform:  + region = (known after apply) 2025-11-01 13:17:53.070638 | orchestrator | 13:17:53.066 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 13:17:53.070645 | orchestrator | 13:17:53.066 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 13:17:53.070653 | orchestrator | 13:17:53.066 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-01 13:17:53.070661 | orchestrator | 13:17:53.066 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 13:17:53.070669 | orchestrator | 13:17:53.066 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 13:17:53.070677 | orchestrator | 13:17:53.066 STDOUT terraform:  } 2025-11-01 13:17:53.070690 | orchestrator | 13:17:53.066 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-11-01 13:17:53.070698 | orchestrator | 13:17:53.066 STDOUT terraform:  + 
resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-11-01 13:17:53.070706 | orchestrator | 13:17:53.066 STDOUT terraform:  + direction = "ingress" 2025-11-01 13:17:53.070714 | orchestrator | 13:17:53.066 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 13:17:53.070722 | orchestrator | 13:17:53.066 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.070729 | orchestrator | 13:17:53.066 STDOUT terraform:  + protocol = "tcp" 2025-11-01 13:17:53.070737 | orchestrator | 13:17:53.066 STDOUT terraform:  + region = (known after apply) 2025-11-01 13:17:53.070745 | orchestrator | 13:17:53.066 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 13:17:53.070753 | orchestrator | 13:17:53.066 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 13:17:53.070761 | orchestrator | 13:17:53.066 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-01 13:17:53.070769 | orchestrator | 13:17:53.066 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 13:17:53.070776 | orchestrator | 13:17:53.066 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 13:17:53.070784 | orchestrator | 13:17:53.066 STDOUT terraform:  } 2025-11-01 13:17:53.070792 | orchestrator | 13:17:53.066 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-11-01 13:17:53.070800 | orchestrator | 13:17:53.066 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-11-01 13:17:53.070808 | orchestrator | 13:17:53.066 STDOUT terraform:  + direction = "ingress" 2025-11-01 13:17:53.070816 | orchestrator | 13:17:53.066 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 13:17:53.070823 | orchestrator | 13:17:53.066 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.070831 | orchestrator | 13:17:53.066 STDOUT terraform:  + protocol = "udp" 2025-11-01 13:17:53.070839 | orchestrator | 13:17:53.066 STDOUT terraform:  + region = (known after apply) 2025-11-01 13:17:53.070857 | orchestrator | 13:17:53.066 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 13:17:53.070865 | orchestrator | 13:17:53.066 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 13:17:53.070873 | orchestrator | 13:17:53.066 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-01 13:17:53.070881 | orchestrator | 13:17:53.066 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 13:17:53.070889 | orchestrator | 13:17:53.067 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 13:17:53.070897 | orchestrator | 13:17:53.067 STDOUT terraform:  } 2025-11-01 13:17:53.070905 | orchestrator | 13:17:53.067 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-11-01 13:17:53.070913 | orchestrator | 13:17:53.067 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-11-01 13:17:53.070921 | orchestrator | 13:17:53.067 STDOUT terraform:  + direction = "ingress" 2025-11-01 13:17:53.070929 | orchestrator | 13:17:53.067 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 13:17:53.070942 | orchestrator | 13:17:53.067 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.070955 | orchestrator | 13:17:53.067 STDOUT terraform:  + protocol = "icmp" 2025-11-01 13:17:53.070967 | orchestrator | 13:17:53.067 STDOUT terraform:  + region = (known after apply) 2025-11-01 13:17:53.070975 | 
orchestrator | 13:17:53.067 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 13:17:53.070983 | orchestrator | 13:17:53.067 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 13:17:53.070995 | orchestrator | 13:17:53.067 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-01 13:17:53.071004 | orchestrator | 13:17:53.067 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 13:17:53.071011 | orchestrator | 13:17:53.067 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 13:17:53.071019 | orchestrator | 13:17:53.067 STDOUT terraform:  } 2025-11-01 13:17:53.071027 | orchestrator | 13:17:53.067 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-11-01 13:17:53.071035 | orchestrator | 13:17:53.067 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-11-01 13:17:53.071043 | orchestrator | 13:17:53.067 STDOUT terraform:  + description = "vrrp" 2025-11-01 13:17:53.071051 | orchestrator | 13:17:53.067 STDOUT terraform:  + direction = "ingress" 2025-11-01 13:17:53.071059 | orchestrator | 13:17:53.067 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 13:17:53.071066 | orchestrator | 13:17:53.067 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.071074 | orchestrator | 13:17:53.067 STDOUT terraform:  + protocol = "112" 2025-11-01 13:17:53.071082 | orchestrator | 13:17:53.067 STDOUT terraform:  + region = (known after apply) 2025-11-01 13:17:53.071090 | orchestrator | 13:17:53.068 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 13:17:53.071108 | orchestrator | 13:17:53.068 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 13:17:53.071116 | orchestrator | 13:17:53.068 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-01 13:17:53.071124 | orchestrator | 13:17:53.068 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 13:17:53.071131 | orchestrator | 13:17:53.068 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 13:17:53.071139 | orchestrator | 13:17:53.068 STDOUT terraform:  } 2025-11-01 13:17:53.071147 | orchestrator | 13:17:53.068 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-11-01 13:17:53.071155 | orchestrator | 13:17:53.068 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-11-01 13:17:53.071163 | orchestrator | 13:17:53.068 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 13:17:53.071171 | orchestrator | 13:17:53.068 STDOUT terraform:  + description = "management security group" 2025-11-01 13:17:53.071179 | orchestrator | 13:17:53.068 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.071186 | orchestrator | 13:17:53.068 STDOUT terraform:  + name = "testbed-management" 2025-11-01 13:17:53.071194 | orchestrator | 13:17:53.068 STDOUT terraform:  + region = (known after apply) 2025-11-01 13:17:53.071202 | orchestrator | 13:17:53.068 STDOUT terraform:  + stateful = (known after apply) 2025-11-01 13:17:53.071209 | orchestrator | 13:17:53.068 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 13:17:53.071217 | orchestrator | 13:17:53.068 STDOUT terraform:  } 2025-11-01 13:17:53.071225 | orchestrator | 13:17:53.068 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-11-01 13:17:53.071233 | orchestrator | 13:17:53.068 STDOUT 
terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-11-01 13:17:53.071241 | orchestrator | 13:17:53.068 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 13:17:53.071248 | orchestrator | 13:17:53.068 STDOUT terraform:  + description = "node security group" 2025-11-01 13:17:53.071256 | orchestrator | 13:17:53.068 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.071264 | orchestrator | 13:17:53.068 STDOUT terraform:  + name = "testbed-node" 2025-11-01 13:17:53.071272 | orchestrator | 13:17:53.068 STDOUT terraform:  + region = (known after apply) 2025-11-01 13:17:53.071279 | orchestrator | 13:17:53.068 STDOUT terraform:  + stateful = (known after apply) 2025-11-01 13:17:53.071287 | orchestrator | 13:17:53.068 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 13:17:53.071324 | orchestrator | 13:17:53.068 STDOUT terraform:  } 2025-11-01 13:17:53.071337 | orchestrator | 13:17:53.068 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-11-01 13:17:53.071345 | orchestrator | 13:17:53.069 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-11-01 13:17:53.071353 | orchestrator | 13:17:53.069 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 13:17:53.071361 | orchestrator | 13:17:53.069 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-11-01 13:17:53.071369 | orchestrator | 13:17:53.069 STDOUT terraform:  + dns_nameservers = [ 2025-11-01 13:17:53.071383 | orchestrator | 13:17:53.069 STDOUT terraform:  + "8.8.8.8", 2025-11-01 13:17:53.071391 | orchestrator | 13:17:53.069 STDOUT terraform:  + "9.9.9.9", 2025-11-01 13:17:53.071399 | orchestrator | 13:17:53.069 STDOUT terraform:  ] 2025-11-01 13:17:53.071407 | orchestrator | 13:17:53.069 STDOUT terraform:  + enable_dhcp = true 2025-11-01 13:17:53.071415 | orchestrator | 13:17:53.069 STDOUT terraform:  + gateway_ip = (known after apply) 2025-11-01 13:17:53.071423 | orchestrator | 13:17:53.069 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.071430 | orchestrator | 13:17:53.069 STDOUT terraform:  + ip_version = 4 2025-11-01 13:17:53.071438 | orchestrator | 13:17:53.069 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-11-01 13:17:53.071446 | orchestrator | 13:17:53.069 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-11-01 13:17:53.071454 | orchestrator | 13:17:53.069 STDOUT terraform:  + name = "subnet-testbed-management" 2025-11-01 13:17:53.071462 | orchestrator | 13:17:53.069 STDOUT terraform:  + network_id = (known after apply) 2025-11-01 13:17:53.071470 | orchestrator | 13:17:53.069 STDOUT terraform:  + no_gateway = false 2025-11-01 13:17:53.071478 | orchestrator | 13:17:53.069 STDOUT terraform:  + region = (known after apply) 2025-11-01 13:17:53.071531 | orchestrator | 13:17:53.069 STDOUT terraform:  + service_types = (known after apply) 2025-11-01 13:17:53.071541 | orchestrator | 13:17:53.069 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 13:17:53.071549 | orchestrator | 13:17:53.069 STDOUT terraform:  + allocation_pool { 2025-11-01 13:17:53.071561 | orchestrator | 13:17:53.069 STDOUT terraform:  + end = "192.168.31.250" 2025-11-01 13:17:53.071569 | orchestrator | 13:17:53.069 STDOUT terraform:  + start = "192.168.31.200" 2025-11-01 13:17:53.071577 | orchestrator | 13:17:53.069 STDOUT terraform:  } 2025-11-01 13:17:53.071585 | orchestrator | 13:17:53.069 STDOUT terraform:  } 2025-11-01 13:17:53.071593 | orchestrator | 
13:17:53.069 STDOUT terraform:  # terraform_data.image will be created 2025-11-01 13:17:53.071600 | orchestrator | 13:17:53.069 STDOUT terraform:  + resource "terraform_data" "image" { 2025-11-01 13:17:53.071608 | orchestrator | 13:17:53.069 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.071616 | orchestrator | 13:17:53.069 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-11-01 13:17:53.071624 | orchestrator | 13:17:53.069 STDOUT terraform:  + output = (known after apply) 2025-11-01 13:17:53.071632 | orchestrator | 13:17:53.069 STDOUT terraform:  } 2025-11-01 13:17:53.071640 | orchestrator | 13:17:53.069 STDOUT terraform:  # terraform_data.image_node will be created 2025-11-01 13:17:53.071648 | orchestrator | 13:17:53.069 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-11-01 13:17:53.071656 | orchestrator | 13:17:53.069 STDOUT terraform:  + id = (known after apply) 2025-11-01 13:17:53.071663 | orchestrator | 13:17:53.069 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-11-01 13:17:53.071672 | orchestrator | 13:17:53.069 STDOUT terraform:  + output = (known after apply) 2025-11-01 13:17:53.071680 | orchestrator | 13:17:53.069 STDOUT terraform:  } 2025-11-01 13:17:53.071693 | orchestrator | 13:17:53.069 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-11-01 13:17:53.071701 | orchestrator | 13:17:53.069 STDOUT terraform: Changes to Outputs: 2025-11-01 13:17:53.071709 | orchestrator | 13:17:53.069 STDOUT terraform:  + manager_address = (sensitive value) 2025-11-01 13:17:53.071722 | orchestrator | 13:17:53.069 STDOUT terraform:  + private_key = (sensitive value) 2025-11-01 13:17:53.288097 | orchestrator | 13:17:53.287 STDOUT terraform: terraform_data.image_node: Creating... 2025-11-01 13:17:53.288161 | orchestrator | 13:17:53.287 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=4c83fb22-e4ad-0eb5-a326-a39552c77558] 2025-11-01 13:17:53.288954 | orchestrator | 13:17:53.288 STDOUT terraform: terraform_data.image: Creating... 2025-11-01 13:17:53.288999 | orchestrator | 13:17:53.288 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=1e1e900d-7747-3953-a4fe-4f744a43bade] 2025-11-01 13:17:53.297623 | orchestrator | 13:17:53.297 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-11-01 13:17:53.310433 | orchestrator | 13:17:53.310 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-11-01 13:17:53.312546 | orchestrator | 13:17:53.311 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-11-01 13:17:53.318667 | orchestrator | 13:17:53.318 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-11-01 13:17:53.321036 | orchestrator | 13:17:53.319 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-11-01 13:17:53.321063 | orchestrator | 13:17:53.320 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-11-01 13:17:53.321069 | orchestrator | 13:17:53.320 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-11-01 13:17:53.328574 | orchestrator | 13:17:53.328 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-11-01 13:17:53.335605 | orchestrator | 13:17:53.335 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-11-01 13:17:53.340593 | orchestrator | 13:17:53.340 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 
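The two terraform_data entries at the end of the plan simply capture the image name ("Ubuntu 24.04") so it can be referenced elsewhere; their "output" only becomes known after apply, as shown above. A minimal HCL sketch of such a resource, assuming the name is fed in through a variable (the variable itself is not visible in this job output):

# Hypothetical variable; only the default value is taken from the plan output above.
variable "image" {
  type    = string
  default = "Ubuntu 24.04"
}

# terraform_data just stores the value; its "output" attribute mirrors "input"
# once applied, which matches the (known after apply) entries in the plan.
resource "terraform_data" "image" {
  input = var.image
}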
2025-11-01 13:17:53.858851 | orchestrator | 13:17:53.856 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-11-01 13:17:53.860149 | orchestrator | 13:17:53.859 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-11-01 13:17:53.866087 | orchestrator | 13:17:53.864 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-11-01 13:17:53.866145 | orchestrator | 13:17:53.865 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-11-01 13:17:53.879024 | orchestrator | 13:17:53.878 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-11-01 13:17:53.882845 | orchestrator | 13:17:53.882 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-11-01 13:17:54.410247 | orchestrator | 13:17:54.408 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=7f6f4cf2-5bfa-4e90-ac99-83579996e650] 2025-11-01 13:17:54.419130 | orchestrator | 13:17:54.417 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-11-01 13:17:56.968802 | orchestrator | 13:17:56.968 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=d5b7cda2-7cd1-4139-8c09-f2864ed6115a] 2025-11-01 13:17:56.975811 | orchestrator | 13:17:56.975 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=08ca9d91-9929-4ba3-9cad-ed75b64a043e] 2025-11-01 13:17:56.978889 | orchestrator | 13:17:56.976 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-11-01 13:17:56.982747 | orchestrator | 13:17:56.982 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-11-01 13:17:56.983568 | orchestrator | 13:17:56.983 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=f57a5620-543a-43ae-a22d-8a42cad6fb24] 2025-11-01 13:17:56.991083 | orchestrator | 13:17:56.990 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-11-01 13:17:57.004213 | orchestrator | 13:17:57.004 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=7d89e604-ccfa-4ce6-abe5-76180138882d] 2025-11-01 13:17:57.010096 | orchestrator | 13:17:57.009 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-11-01 13:17:57.023974 | orchestrator | 13:17:57.023 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=072d7475-b9a0-4b66-89cc-e4fcf46016ff] 2025-11-01 13:17:57.031282 | orchestrator | 13:17:57.031 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-11-01 13:17:57.034101 | orchestrator | 13:17:57.033 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=c17a8236-4766-4598-abab-5d58d5ce65a6] 2025-11-01 13:17:57.039916 | orchestrator | 13:17:57.039 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
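The node_volume resources completing above are indexed [0] through [8], which points to a single counted volume definition. A rough sketch of what that could look like; the count matches the indices seen in the log, while the volume name and size are placeholders, since neither appears in the job output:

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9                                        # indices [0]..[8] in the log
  name  = "testbed-node-volume-${count.index}"     # assumed naming scheme
  size  = 20                                       # placeholder; real size not shown in this log
}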
2025-11-01 13:17:57.130339 | orchestrator | 13:17:57.129 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=dbba508b-4e10-452f-8431-011284f42e7d] 2025-11-01 13:17:57.143211 | orchestrator | 13:17:57.142 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=4fee078c-1565-4ab1-bdda-b8bebdd42045] 2025-11-01 13:17:57.147156 | orchestrator | 13:17:57.146 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-11-01 13:17:57.149339 | orchestrator | 13:17:57.149 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=c347dc72-435c-43d5-a9cf-2c60f1de142e] 2025-11-01 13:17:57.152291 | orchestrator | 13:17:57.152 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=271c6b42a5d467080935e6e79b86068789dee28f] 2025-11-01 13:17:57.155669 | orchestrator | 13:17:57.155 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-11-01 13:17:57.162460 | orchestrator | 13:17:57.162 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-11-01 13:17:57.166401 | orchestrator | 13:17:57.166 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=f8c2ffc1a9440576344e9b45976be25142ffe2ba] 2025-11-01 13:17:57.780803 | orchestrator | 13:17:57.780 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=de354a54-52ae-4017-b037-a7d8d0c0cd50] 2025-11-01 13:17:58.695896 | orchestrator | 13:17:58.695 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=d2fea1c4-5330-4da5-b40d-591bb82df852] 2025-11-01 13:17:58.704629 | orchestrator | 13:17:58.704 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-11-01 13:18:00.372479 | orchestrator | 13:18:00.372 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=9fbd7e64-ce07-4fda-ab82-c32e390fbede] 2025-11-01 13:18:00.412672 | orchestrator | 13:18:00.412 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=5c449378-6b5d-40f6-b01f-2139793b2b74] 2025-11-01 13:18:00.431141 | orchestrator | 13:18:00.430 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=05ca2b4f-b6fa-412a-995c-f659adce7ca3] 2025-11-01 13:18:00.457860 | orchestrator | 13:18:00.457 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740] 2025-11-01 13:18:00.501978 | orchestrator | 13:18:00.501 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=f1202cd8-baed-4ac9-b605-f8cc9e76d4d5] 2025-11-01 13:18:00.527854 | orchestrator | 13:18:00.527 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=fd211c63-6e70-45f0-80e9-be44e116b0ad] 2025-11-01 13:18:02.168694 | orchestrator | 13:18:02.168 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=90842517-6ac1-46c7-acdc-46cd0af74600] 2025-11-01 13:18:02.174538 | orchestrator | 13:18:02.174 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-11-01 13:18:02.176599 | orchestrator | 13:18:02.176 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 
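The management subnet that just finished creating was planned with a 192.168.16.0/20 CIDR, two public DNS resolvers and an allocation pool at the top of the range. Reassembled from those plan attributes, its HCL definition would look roughly like this; only the network_id reference is an assumption (pointing at the net_management network created a few seconds earlier):

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id  # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # Allocation pool at the top of the /20, exactly as listed in the plan
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}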
2025-11-01 13:18:02.178603 | orchestrator | 13:18:02.178 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-11-01 13:18:02.375009 | orchestrator | 13:18:02.374 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=58c2fe3f-d86c-48f8-ba66-23aa41088534] 2025-11-01 13:18:02.396444 | orchestrator | 13:18:02.396 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-11-01 13:18:02.397733 | orchestrator | 13:18:02.397 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-11-01 13:18:02.398858 | orchestrator | 13:18:02.398 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-11-01 13:18:02.398889 | orchestrator | 13:18:02.398 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-11-01 13:18:02.398897 | orchestrator | 13:18:02.398 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-11-01 13:18:02.402806 | orchestrator | 13:18:02.402 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-11-01 13:18:02.403291 | orchestrator | 13:18:02.403 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-11-01 13:18:02.404048 | orchestrator | 13:18:02.403 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-11-01 13:18:02.411051 | orchestrator | 13:18:02.410 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=f71e4641-c72a-4598-9819-643eefd60de9] 2025-11-01 13:18:02.417528 | orchestrator | 13:18:02.417 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-11-01 13:18:02.912399 | orchestrator | 13:18:02.912 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=fc49ce70-2ca1-4ab5-bcf9-f7f66c84d6c6] 2025-11-01 13:18:02.926726 | orchestrator | 13:18:02.926 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-11-01 13:18:02.988704 | orchestrator | 13:18:02.988 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=d5cbe547-131e-471c-bad8-f0dff1e0e2f6] 2025-11-01 13:18:02.994729 | orchestrator | 13:18:02.994 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-11-01 13:18:03.193437 | orchestrator | 13:18:03.193 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=b783b34b-3a2f-4ded-9160-d5eee1efa031] 2025-11-01 13:18:03.200664 | orchestrator | 13:18:03.200 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 
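security_group_management_rule1, which completed above, is the "ssh" entry from the plan: ingress TCP port 22 from 0.0.0.0/0. Reconstructed from those plan attributes it would read roughly as follows; the security_group_id reference is an assumption (pointing at the testbed-management group created just before):

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id  # assumed reference
}

The other rules in the plan follow the same pattern, varying only protocol, port range and remote prefix (wireguard on UDP 51820, VRRP as protocol 112, and the TCP/UDP rules restricted to 192.168.16.0/20).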
2025-11-01 13:18:03.271412 | orchestrator | 13:18:03.271 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=570e4d5c-179c-4fb7-8bcb-f102d92eb0a4] 2025-11-01 13:18:03.273287 | orchestrator | 13:18:03.273 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=78605ead-973e-42b3-b03c-729a6f2ecc72] 2025-11-01 13:18:03.277053 | orchestrator | 13:18:03.276 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=ea6e700a-56b8-4897-9d57-41c809951b45] 2025-11-01 13:18:03.278783 | orchestrator | 13:18:03.278 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-11-01 13:18:03.280015 | orchestrator | 13:18:03.279 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-11-01 13:18:03.286850 | orchestrator | 13:18:03.286 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-11-01 13:18:03.442534 | orchestrator | 13:18:03.442 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=12bc2d8b-22de-461b-b78c-7af897484a29] 2025-11-01 13:18:03.448275 | orchestrator | 13:18:03.448 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-11-01 13:18:03.640207 | orchestrator | 13:18:03.639 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=1143efdb-da79-4a98-a54e-0e91af19eef6] 2025-11-01 13:18:03.650756 | orchestrator | 13:18:03.650 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=445c7774-ae34-48b7-9b57-933681b58070] 2025-11-01 13:18:03.826944 | orchestrator | 13:18:03.826 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=10899d8e-525b-4dfc-b8cf-41435f619065] 2025-11-01 13:18:04.122742 | orchestrator | 13:18:04.122 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=d5589bad-db00-44c0-b5b7-1ac275587ca7] 2025-11-01 13:18:04.292703 | orchestrator | 13:18:04.292 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=32b6f1e4-8779-4eb1-b0d2-b962e8b1b5b1] 2025-11-01 13:18:04.344857 | orchestrator | 13:18:04.344 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=0d19cd6e-63eb-42ba-bc10-d19c33d5b9f1] 2025-11-01 13:18:04.412001 | orchestrator | 13:18:04.411 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=bb493ff1-d387-4b78-89de-f49fcf990c43] 2025-11-01 13:18:04.457339 | orchestrator | 13:18:04.457 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=22937fa8-d25a-4673-bcd4-3cd286afabcb] 2025-11-01 13:18:04.650457 | orchestrator | 13:18:04.650 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=9efca028-79a3-4c34-b8f8-b49a4e3fdc0c] 2025-11-01 13:18:04.849102 | orchestrator | 13:18:04.848 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=3f85f534-55e4-4a90-acaf-e81769fa2acf] 2025-11-01 13:18:04.858632 | orchestrator | 13:18:04.858 
STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-11-01 13:18:04.883544 | orchestrator | 13:18:04.883 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-11-01 13:18:04.889998 | orchestrator | 13:18:04.889 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-11-01 13:18:04.890055 | orchestrator | 13:18:04.889 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-11-01 13:18:04.901374 | orchestrator | 13:18:04.901 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-11-01 13:18:04.901622 | orchestrator | 13:18:04.901 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-11-01 13:18:04.924342 | orchestrator | 13:18:04.924 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-11-01 13:18:06.588214 | orchestrator | 13:18:06.587 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=839099ff-1dd9-44ac-aced-121877369047] 2025-11-01 13:18:06.602740 | orchestrator | 13:18:06.602 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-11-01 13:18:06.603733 | orchestrator | 13:18:06.603 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-11-01 13:18:06.604677 | orchestrator | 13:18:06.604 STDOUT terraform: local_file.inventory: Creating... 2025-11-01 13:18:06.615383 | orchestrator | 13:18:06.615 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=f0fdcbc760fd68ade1d4bac08f13fd93b1335854] 2025-11-01 13:18:06.615684 | orchestrator | 13:18:06.615 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=8b473c40e319001c76db793cdbe341970eceae91] 2025-11-01 13:18:07.604705 | orchestrator | 13:18:07.604 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=839099ff-1dd9-44ac-aced-121877369047] 2025-11-01 13:18:14.889827 | orchestrator | 13:18:14.889 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-11-01 13:18:14.910820 | orchestrator | 13:18:14.910 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-11-01 13:18:14.911895 | orchestrator | 13:18:14.911 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-11-01 13:18:14.923282 | orchestrator | 13:18:14.923 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-11-01 13:18:14.927491 | orchestrator | 13:18:14.927 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-11-01 13:18:14.928722 | orchestrator | 13:18:14.928 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-11-01 13:18:24.891866 | orchestrator | 13:18:24.891 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-11-01 13:18:24.910986 | orchestrator | 13:18:24.910 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-11-01 13:18:24.912047 | orchestrator | 13:18:24.911 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[20s elapsed] 2025-11-01 13:18:24.924259 | orchestrator | 13:18:24.924 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-11-01 13:18:24.928520 | orchestrator | 13:18:24.928 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-11-01 13:18:24.929739 | orchestrator | 13:18:24.929 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-11-01 13:18:25.477912 | orchestrator | 13:18:25.477 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=0abff37f-11e0-4426-9bc3-c95bcb688390] 2025-11-01 13:18:25.559472 | orchestrator | 13:18:25.559 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=9174b79d-bca8-42e3-a999-2df4c130f98a] 2025-11-01 13:18:25.806080 | orchestrator | 13:18:25.805 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=a54ed414-e74a-46ea-8ea1-dfed1ccf6c79] 2025-11-01 13:18:34.913976 | orchestrator | 13:18:34.913 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-11-01 13:18:34.925112 | orchestrator | 13:18:34.924 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-11-01 13:18:34.930384 | orchestrator | 13:18:34.930 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-11-01 13:18:35.655926 | orchestrator | 13:18:35.655 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=b2050e74-c83f-45bd-9a87-f382cbe62bf8] 2025-11-01 13:18:36.447931 | orchestrator | 13:18:36.447 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=4319d528-ce45-44e1-9c7a-1696f9c0beb3] 2025-11-01 13:18:44.917363 | orchestrator | 13:18:44.917 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2025-11-01 13:18:45.810955 | orchestrator | 13:18:45.810 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=9efefd47-c7b3-42b3-8cb4-91f8c418b7f6] 2025-11-01 13:18:45.841156 | orchestrator | 13:18:45.841 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-11-01 13:18:45.846913 | orchestrator | 13:18:45.846 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=8573878471942977794] 2025-11-01 13:18:45.849909 | orchestrator | 13:18:45.849 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-11-01 13:18:45.850203 | orchestrator | 13:18:45.850 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-11-01 13:18:45.850411 | orchestrator | 13:18:45.850 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-11-01 13:18:45.850896 | orchestrator | 13:18:45.850 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-11-01 13:18:45.851126 | orchestrator | 13:18:45.851 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-11-01 13:18:45.851339 | orchestrator | 13:18:45.851 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-11-01 13:18:45.855688 | orchestrator | 13:18:45.855 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 
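Each node_volume_attachment above binds one of the nine extra volumes to a compute instance; the attachment ids that appear shortly afterwards are simply instance-id/volume-id pairs. A sketch of such an attachment; only the resource type and its two required arguments come from the log, while the count and the way volumes are spread over servers are illustrative assumptions:

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  # Illustrative mapping only: the log shows three volumes per instance,
  # but the real index arithmetic is not visible in this output.
  instance_id = openstack_compute_instance_v2.node_server[count.index % 3 + 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}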
2025-11-01 13:18:45.863094 | orchestrator | 13:18:45.862 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-11-01 13:18:45.878934 | orchestrator | 13:18:45.878 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-11-01 13:18:45.881647 | orchestrator | 13:18:45.881 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-11-01 13:18:49.381245 | orchestrator | 13:18:49.380 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=9174b79d-bca8-42e3-a999-2df4c130f98a/7d89e604-ccfa-4ce6-abe5-76180138882d] 2025-11-01 13:18:49.404248 | orchestrator | 13:18:49.403 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=0abff37f-11e0-4426-9bc3-c95bcb688390/f57a5620-543a-43ae-a22d-8a42cad6fb24] 2025-11-01 13:18:49.412188 | orchestrator | 13:18:49.411 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=0abff37f-11e0-4426-9bc3-c95bcb688390/c347dc72-435c-43d5-a9cf-2c60f1de142e] 2025-11-01 13:18:49.438735 | orchestrator | 13:18:49.438 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=9efefd47-c7b3-42b3-8cb4-91f8c418b7f6/072d7475-b9a0-4b66-89cc-e4fcf46016ff] 2025-11-01 13:18:49.451342 | orchestrator | 13:18:49.451 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=9efefd47-c7b3-42b3-8cb4-91f8c418b7f6/d5b7cda2-7cd1-4139-8c09-f2864ed6115a] 2025-11-01 13:18:49.470915 | orchestrator | 13:18:49.470 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=9174b79d-bca8-42e3-a999-2df4c130f98a/c17a8236-4766-4598-abab-5d58d5ce65a6] 2025-11-01 13:18:55.518553 | orchestrator | 13:18:55.518 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=9174b79d-bca8-42e3-a999-2df4c130f98a/4fee078c-1565-4ab1-bdda-b8bebdd42045] 2025-11-01 13:18:55.527717 | orchestrator | 13:18:55.527 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=0abff37f-11e0-4426-9bc3-c95bcb688390/dbba508b-4e10-452f-8431-011284f42e7d] 2025-11-01 13:18:55.547560 | orchestrator | 13:18:55.547 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=9efefd47-c7b3-42b3-8cb4-91f8c418b7f6/08ca9d91-9929-4ba3-9cad-ed75b64a043e] 2025-11-01 13:18:55.883899 | orchestrator | 13:18:55.883 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-11-01 13:19:05.888331 | orchestrator | 13:19:05.888 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-11-01 13:19:06.539922 | orchestrator | 13:19:06.539 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=38ad69dc-61f3-44b3-8ad9-2f03f6d4170f] 2025-11-01 13:19:06.561162 | orchestrator | 13:19:06.560 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
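The apply finishes with 64 resources added, matching the plan. The two outputs listed next show no values because they were declared sensitive, exactly as the "Changes to Outputs" section of the plan indicated. A minimal sketch of such declarations; the value expressions are assumptions about where the data most plausibly comes from (the floating IP and the keypair seen earlier in the apply):

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address  # assumed source
  sensitive = true   # this is why the console prints no value below
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key  # assumed source
  sensitive = true
}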
2025-11-01 13:19:06.561235 | orchestrator | 13:19:06.561 STDOUT terraform: Outputs: 2025-11-01 13:19:06.561271 | orchestrator | 13:19:06.561 STDOUT terraform: manager_address = 2025-11-01 13:19:06.561284 | orchestrator | 13:19:06.561 STDOUT terraform: private_key = 2025-11-01 13:19:06.833905 | orchestrator | ok: Runtime: 0:01:19.938147 2025-11-01 13:19:06.875549 | 2025-11-01 13:19:06.875699 | TASK [Create infrastructure (stable)] 2025-11-01 13:19:07.412495 | orchestrator | skipping: Conditional result was False 2025-11-01 13:19:07.421829 | 2025-11-01 13:19:07.421970 | TASK [Fetch manager address] 2025-11-01 13:19:07.991517 | orchestrator | ok 2025-11-01 13:19:08.002889 | 2025-11-01 13:19:08.003019 | TASK [Set manager_host address] 2025-11-01 13:19:08.110826 | orchestrator | ok 2025-11-01 13:19:08.118540 | 2025-11-01 13:19:08.118661 | LOOP [Update ansible collections] 2025-11-01 13:19:10.676341 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-11-01 13:19:10.676558 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-11-01 13:19:10.676593 | orchestrator | Starting galaxy collection install process 2025-11-01 13:19:10.676618 | orchestrator | Process install dependency map 2025-11-01 13:19:10.676639 | orchestrator | Starting collection install process 2025-11-01 13:19:10.676660 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2025-11-01 13:19:10.676683 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2025-11-01 13:19:10.676707 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-11-01 13:19:10.676759 | orchestrator | ok: Item: commons Runtime: 0:00:02.253196 2025-11-01 13:19:11.563314 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-11-01 13:19:11.563415 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-11-01 13:19:11.563449 | orchestrator | Starting galaxy collection install process 2025-11-01 13:19:11.563479 | orchestrator | Process install dependency map 2025-11-01 13:19:11.563557 | orchestrator | Starting collection install process 2025-11-01 13:19:11.563585 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2025-11-01 13:19:11.563606 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2025-11-01 13:19:11.563626 | orchestrator | osism.services:999.0.0 was installed successfully 2025-11-01 13:19:11.563689 | orchestrator | ok: Item: services Runtime: 0:00:00.677159 2025-11-01 13:19:11.598507 | 2025-11-01 13:19:11.598619 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-11-01 13:19:22.083299 | orchestrator | ok 2025-11-01 13:19:22.094895 | 2025-11-01 13:19:22.095020 | TASK [Wait a little longer for the manager so that everything is ready] 2025-11-01 13:20:22.128736 | orchestrator | ok 2025-11-01 13:20:22.136158 | 2025-11-01 13:20:22.136279 | TASK [Fetch manager ssh hostkey] 2025-11-01 13:20:23.706625 | orchestrator | Output suppressed because no_log was given 2025-11-01 13:20:23.720221 | 2025-11-01 13:20:23.720403 | TASK [Get ssh keypair from terraform environment] 2025-11-01 13:20:24.276312 | orchestrator 
| ok: Runtime: 0:00:00.008265 2025-11-01 13:20:24.288036 | 2025-11-01 13:20:24.288184 | TASK [Point out that the following task takes some time and does not give any output] 2025-11-01 13:20:24.320525 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-11-01 13:20:24.327587 | 2025-11-01 13:20:24.327704 | TASK [Run manager part 0] 2025-11-01 13:20:25.277831 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-11-01 13:20:25.322066 | orchestrator | 2025-11-01 13:20:25.322105 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-11-01 13:20:25.322113 | orchestrator | 2025-11-01 13:20:25.322125 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-11-01 13:20:26.907131 | orchestrator | ok: [testbed-manager] 2025-11-01 13:20:26.907173 | orchestrator | 2025-11-01 13:20:26.907194 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-11-01 13:20:26.907205 | orchestrator | 2025-11-01 13:20:26.907216 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 13:20:29.189849 | orchestrator | ok: [testbed-manager] 2025-11-01 13:20:29.189894 | orchestrator | 2025-11-01 13:20:29.189905 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-11-01 13:20:29.730753 | orchestrator | ok: [testbed-manager] 2025-11-01 13:20:29.730810 | orchestrator | 2025-11-01 13:20:29.730821 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-11-01 13:20:29.773016 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:20:29.773093 | orchestrator | 2025-11-01 13:20:29.773115 | orchestrator | TASK [Update package cache] **************************************************** 2025-11-01 13:20:29.800402 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:20:29.800443 | orchestrator | 2025-11-01 13:20:29.800451 | orchestrator | TASK [Install required packages] *********************************************** 2025-11-01 13:20:29.828150 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:20:29.828186 | orchestrator | 2025-11-01 13:20:29.828192 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-11-01 13:20:29.855495 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:20:29.855524 | orchestrator | 2025-11-01 13:20:29.855529 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-11-01 13:20:29.876266 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:20:29.876294 | orchestrator | 2025-11-01 13:20:29.876301 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2025-11-01 13:20:29.903858 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:20:29.903897 | orchestrator | 2025-11-01 13:20:29.903907 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-11-01 13:20:29.946000 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:20:29.946070 | orchestrator | 2025-11-01 13:20:29.946079 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-11-01 13:20:30.630831 | orchestrator | changed: 
[testbed-manager] 2025-11-01 13:20:30.630871 | orchestrator | 2025-11-01 13:20:30.630877 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-11-01 13:22:57.869555 | orchestrator | changed: [testbed-manager] 2025-11-01 13:22:57.869661 | orchestrator | 2025-11-01 13:22:57.869680 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-11-01 13:24:23.424838 | orchestrator | changed: [testbed-manager] 2025-11-01 13:24:23.424885 | orchestrator | 2025-11-01 13:24:23.424894 | orchestrator | TASK [Install required packages] *********************************************** 2025-11-01 13:24:44.983922 | orchestrator | changed: [testbed-manager] 2025-11-01 13:24:44.984019 | orchestrator | 2025-11-01 13:24:44.984037 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-11-01 13:24:54.876235 | orchestrator | changed: [testbed-manager] 2025-11-01 13:24:54.876467 | orchestrator | 2025-11-01 13:24:54.876493 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-11-01 13:24:54.924241 | orchestrator | ok: [testbed-manager] 2025-11-01 13:24:54.924324 | orchestrator | 2025-11-01 13:24:54.924361 | orchestrator | TASK [Get current user] ******************************************************** 2025-11-01 13:24:55.729140 | orchestrator | ok: [testbed-manager] 2025-11-01 13:24:55.729216 | orchestrator | 2025-11-01 13:24:55.729233 | orchestrator | TASK [Create venv directory] *************************************************** 2025-11-01 13:24:56.492679 | orchestrator | changed: [testbed-manager] 2025-11-01 13:24:56.492755 | orchestrator | 2025-11-01 13:24:56.492773 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-11-01 13:25:04.051087 | orchestrator | changed: [testbed-manager] 2025-11-01 13:25:04.051162 | orchestrator | 2025-11-01 13:25:04.051197 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-11-01 13:25:11.295636 | orchestrator | changed: [testbed-manager] 2025-11-01 13:25:11.295705 | orchestrator | 2025-11-01 13:25:11.295722 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-11-01 13:25:14.437970 | orchestrator | changed: [testbed-manager] 2025-11-01 13:25:14.438104 | orchestrator | 2025-11-01 13:25:14.438123 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-11-01 13:25:16.522090 | orchestrator | changed: [testbed-manager] 2025-11-01 13:25:16.522396 | orchestrator | 2025-11-01 13:25:16.522418 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-11-01 13:25:17.722178 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-11-01 13:25:17.722255 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-11-01 13:25:17.722270 | orchestrator | 2025-11-01 13:25:17.722282 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-11-01 13:25:17.766517 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-11-01 13:25:17.766584 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-11-01 13:25:17.766598 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-11-01 13:25:17.766610 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-11-01 13:25:29.396936 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-11-01 13:25:29.396986 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-11-01 13:25:29.396995 | orchestrator | 2025-11-01 13:25:29.397002 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-11-01 13:25:30.054067 | orchestrator | changed: [testbed-manager] 2025-11-01 13:25:30.054141 | orchestrator | 2025-11-01 13:25:30.054155 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-11-01 13:25:50.641111 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-11-01 13:25:50.641159 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-11-01 13:25:50.641169 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-11-01 13:25:50.641176 | orchestrator | 2025-11-01 13:25:50.641183 | orchestrator | TASK [Install local collections] *********************************************** 2025-11-01 13:25:53.181815 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-11-01 13:25:53.182367 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-11-01 13:25:53.182392 | orchestrator | 2025-11-01 13:25:53.182405 | orchestrator | PLAY [Create operator user] **************************************************** 2025-11-01 13:25:53.182418 | orchestrator | 2025-11-01 13:25:53.182430 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 13:25:54.657044 | orchestrator | ok: [testbed-manager] 2025-11-01 13:25:54.657126 | orchestrator | 2025-11-01 13:25:54.657144 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-11-01 13:25:54.704430 | orchestrator | ok: [testbed-manager] 2025-11-01 13:25:54.704486 | orchestrator | 2025-11-01 13:25:54.704498 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-11-01 13:25:54.768751 | orchestrator | ok: [testbed-manager] 2025-11-01 13:25:54.768825 | orchestrator | 2025-11-01 13:25:54.768841 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-11-01 13:25:55.551726 | orchestrator | changed: [testbed-manager] 2025-11-01 13:25:55.551782 | orchestrator | 2025-11-01 13:25:55.551791 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-11-01 13:25:56.330811 | orchestrator | changed: [testbed-manager] 2025-11-01 13:25:56.330878 | orchestrator | 2025-11-01 13:25:56.330893 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-11-01 13:25:57.799608 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-11-01 13:25:57.799689 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-11-01 13:25:57.799705 | orchestrator | 2025-11-01 13:25:57.799740 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-11-01 13:25:59.133184 | orchestrator | changed: [testbed-manager] 2025-11-01 13:25:59.133225 | orchestrator | 2025-11-01 13:25:59.133231 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-11-01 13:26:01.010752 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-11-01 13:26:01.010802 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-11-01 13:26:01.010810 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-11-01 13:26:01.010818 | orchestrator | 2025-11-01 13:26:01.010827 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-11-01 13:26:01.063617 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:26:01.063665 | orchestrator | 2025-11-01 13:26:01.063672 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-11-01 13:26:01.639641 | orchestrator | changed: [testbed-manager] 2025-11-01 13:26:01.639679 | orchestrator | 2025-11-01 13:26:01.639688 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-11-01 13:26:01.706081 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:26:01.706121 | orchestrator | 2025-11-01 13:26:01.706129 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-11-01 13:26:02.564295 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-01 13:26:02.564334 | orchestrator | changed: [testbed-manager] 2025-11-01 13:26:02.564377 | orchestrator | 2025-11-01 13:26:02.564382 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-11-01 13:26:02.599150 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:26:02.599184 | orchestrator | 2025-11-01 13:26:02.599189 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-11-01 13:26:02.625195 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:26:02.625226 | orchestrator | 2025-11-01 13:26:02.625232 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-11-01 13:26:02.660492 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:26:02.660526 | orchestrator | 2025-11-01 13:26:02.660535 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-11-01 13:26:02.719454 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:26:02.719507 | orchestrator | 2025-11-01 13:26:02.719523 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-11-01 13:26:03.456696 | orchestrator | ok: [testbed-manager] 2025-11-01 13:26:03.456758 | orchestrator | 2025-11-01 13:26:03.456775 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-11-01 13:26:03.456790 | orchestrator | 2025-11-01 13:26:03.456803 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 13:26:04.908933 | orchestrator | ok: [testbed-manager] 2025-11-01 13:26:04.908983 | orchestrator | 2025-11-01 13:26:04.908994 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-11-01 13:26:05.930473 | orchestrator | changed: [testbed-manager] 2025-11-01 13:26:05.930555 | orchestrator | 2025-11-01 13:26:05.930571 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:26:05.930584 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-11-01 
13:26:05.930596 | orchestrator | 2025-11-01 13:26:06.098654 | orchestrator | ok: Runtime: 0:05:41.389299 2025-11-01 13:26:06.113409 | 2025-11-01 13:26:06.113527 | TASK [Point out that logging in to the manager is now possible] 2025-11-01 13:26:06.160801 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-11-01 13:26:06.170485 | 2025-11-01 13:26:06.170615 | TASK [Point out that the following task takes some time and does not give any output] 2025-11-01 13:26:06.205955 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output from it here. It takes a few minutes for this task to complete. 2025-11-01 13:26:06.217324 | 2025-11-01 13:26:06.217467 | TASK [Run manager part 1 + 2] 2025-11-01 13:26:07.318310 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-11-01 13:26:07.454808 | orchestrator | 2025-11-01 13:26:07.454874 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-11-01 13:26:07.454891 | orchestrator | 2025-11-01 13:26:07.454920 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 13:26:09.982540 | orchestrator | ok: [testbed-manager] 2025-11-01 13:26:09.982608 | orchestrator | 2025-11-01 13:26:09.982665 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-11-01 13:26:10.014334 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:26:10.014417 | orchestrator | 2025-11-01 13:26:10.014435 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-11-01 13:26:10.043164 | orchestrator | ok: [testbed-manager] 2025-11-01 13:26:10.043208 | orchestrator | 2025-11-01 13:26:10.043223 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-11-01 13:26:10.069625 | orchestrator | ok: [testbed-manager] 2025-11-01 13:26:10.069672 | orchestrator | 2025-11-01 13:26:10.069687 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-11-01 13:26:10.127984 | orchestrator | ok: [testbed-manager] 2025-11-01 13:26:10.128027 | orchestrator | 2025-11-01 13:26:10.128038 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-11-01 13:26:10.176117 | orchestrator | ok: [testbed-manager] 2025-11-01 13:26:10.176148 | orchestrator | 2025-11-01 13:26:10.176155 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-11-01 13:26:10.207583 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-11-01 13:26:10.207599 | orchestrator | 2025-11-01 13:26:10.207604 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-11-01 13:26:10.905300 | orchestrator | ok: [testbed-manager] 2025-11-01 13:26:10.905350 | orchestrator | 2025-11-01 13:26:10.905357 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-11-01 13:26:10.947744 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:26:10.947781 | orchestrator | 2025-11-01 13:26:10.947786 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-11-01 13:26:12.312605 | orchestrator | changed:
[testbed-manager] 2025-11-01 13:26:12.312642 | orchestrator | 2025-11-01 13:26:12.312649 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-11-01 13:26:12.868542 | orchestrator | ok: [testbed-manager] 2025-11-01 13:26:12.868580 | orchestrator | 2025-11-01 13:26:12.868587 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-11-01 13:26:14.043414 | orchestrator | changed: [testbed-manager] 2025-11-01 13:26:14.043457 | orchestrator | 2025-11-01 13:26:14.043468 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-11-01 13:26:34.559620 | orchestrator | changed: [testbed-manager] 2025-11-01 13:26:34.559687 | orchestrator | 2025-11-01 13:26:34.559703 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-11-01 13:26:35.262846 | orchestrator | ok: [testbed-manager] 2025-11-01 13:26:35.262907 | orchestrator | 2025-11-01 13:26:35.262924 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-11-01 13:26:35.318136 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:26:35.318191 | orchestrator | 2025-11-01 13:26:35.318206 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-11-01 13:26:36.300514 | orchestrator | changed: [testbed-manager] 2025-11-01 13:26:36.300550 | orchestrator | 2025-11-01 13:26:36.300559 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-11-01 13:26:37.280490 | orchestrator | changed: [testbed-manager] 2025-11-01 13:26:37.280530 | orchestrator | 2025-11-01 13:26:37.280539 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-11-01 13:26:37.873213 | orchestrator | changed: [testbed-manager] 2025-11-01 13:26:37.873287 | orchestrator | 2025-11-01 13:26:37.873300 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-11-01 13:26:37.910796 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-11-01 13:26:37.910850 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-11-01 13:26:37.910857 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-11-01 13:26:37.910862 | orchestrator | deprecation_warnings=False in ansible.cfg. 
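The deprecation warning above (it appears twice in this job, here and during 'Sync sources in /opt/src') also names its remedy: deprecation_warnings=False in ansible.cfg. A minimal bash sketch of applying that follows; which ansible.cfg governs the play that printed the warning is not visible in this log, so the path below is an assumption borrowed from the /opt/configuration/environments tree referenced later on.

# Sketch, not part of this job: add the setting the warning recommends under
# the existing [defaults] section, only if it is not already present.
ANSIBLE_CFG=/opt/configuration/environments/ansible.cfg   # assumed path
grep -q '^deprecation_warnings' "${ANSIBLE_CFG}" ||
    sed -i '/^\[defaults\]/a deprecation_warnings = False' "${ANSIBLE_CFG}"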
2025-11-01 13:26:40.765453 | orchestrator | changed: [testbed-manager] 2025-11-01 13:26:40.765710 | orchestrator | 2025-11-01 13:26:40.765729 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-11-01 13:26:50.881510 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-11-01 13:26:50.881600 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-11-01 13:26:50.881617 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-11-01 13:26:50.881629 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-11-01 13:26:50.881649 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-11-01 13:26:50.881661 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-11-01 13:26:50.881673 | orchestrator | 2025-11-01 13:26:50.881685 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-11-01 13:26:51.910915 | orchestrator | changed: [testbed-manager] 2025-11-01 13:26:51.910993 | orchestrator | 2025-11-01 13:26:51.911008 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-11-01 13:26:51.954279 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:26:51.954383 | orchestrator | 2025-11-01 13:26:51.954400 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-11-01 13:26:54.565943 | orchestrator | changed: [testbed-manager] 2025-11-01 13:26:54.566096 | orchestrator | 2025-11-01 13:26:54.566117 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-11-01 13:26:54.606740 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:26:54.606794 | orchestrator | 2025-11-01 13:26:54.606807 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-11-01 13:28:45.210166 | orchestrator | changed: [testbed-manager] 2025-11-01 13:28:45.210264 | orchestrator | 2025-11-01 13:28:45.210284 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-11-01 13:28:46.509527 | orchestrator | ok: [testbed-manager] 2025-11-01 13:28:46.509561 | orchestrator | 2025-11-01 13:28:46.509568 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:28:46.509575 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-11-01 13:28:46.509580 | orchestrator | 2025-11-01 13:28:46.832287 | orchestrator | ok: Runtime: 0:02:40.072076 2025-11-01 13:28:46.849061 | 2025-11-01 13:28:46.849199 | TASK [Reboot manager] 2025-11-01 13:28:48.383143 | orchestrator | ok: Runtime: 0:00:01.053144 2025-11-01 13:28:48.397919 | 2025-11-01 13:28:48.398103 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-11-01 13:29:04.810263 | orchestrator | ok 2025-11-01 13:29:04.821521 | 2025-11-01 13:29:04.821653 | TASK [Wait a little longer for the manager so that everything is ready] 2025-11-01 13:30:04.867879 | orchestrator | ok 2025-11-01 13:30:04.878674 | 2025-11-01 13:30:04.878807 | TASK [Deploy manager + bootstrap nodes] 2025-11-01 13:30:07.726663 | orchestrator | 2025-11-01 13:30:07.726813 | orchestrator | # DEPLOY MANAGER 2025-11-01 13:30:07.726834 | orchestrator | 2025-11-01 13:30:07.726848 | orchestrator | + set -e 2025-11-01 13:30:07.726861 | orchestrator | + echo 2025-11-01 13:30:07.726875 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-11-01 13:30:07.726891 | orchestrator | + echo 2025-11-01 13:30:07.726938 | orchestrator | + cat /opt/manager-vars.sh 2025-11-01 13:30:07.730953 | orchestrator | export NUMBER_OF_NODES=6 2025-11-01 13:30:07.730982 | orchestrator | 2025-11-01 13:30:07.730997 | orchestrator | export CEPH_VERSION=reef 2025-11-01 13:30:07.731009 | orchestrator | export CONFIGURATION_VERSION=main 2025-11-01 13:30:07.731022 | orchestrator | export MANAGER_VERSION=latest 2025-11-01 13:30:07.731044 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-11-01 13:30:07.731055 | orchestrator | 2025-11-01 13:30:07.731073 | orchestrator | export ARA=false 2025-11-01 13:30:07.731084 | orchestrator | export DEPLOY_MODE=manager 2025-11-01 13:30:07.731101 | orchestrator | export TEMPEST=false 2025-11-01 13:30:07.731112 | orchestrator | export IS_ZUUL=true 2025-11-01 13:30:07.731123 | orchestrator | 2025-11-01 13:30:07.731140 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.208 2025-11-01 13:30:07.731152 | orchestrator | export EXTERNAL_API=false 2025-11-01 13:30:07.731162 | orchestrator | 2025-11-01 13:30:07.731173 | orchestrator | export IMAGE_USER=ubuntu 2025-11-01 13:30:07.731186 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-11-01 13:30:07.731196 | orchestrator | 2025-11-01 13:30:07.731207 | orchestrator | export CEPH_STACK=ceph-ansible 2025-11-01 13:30:07.731688 | orchestrator | 2025-11-01 13:30:07.731706 | orchestrator | + echo 2025-11-01 13:30:07.731723 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-01 13:30:07.733048 | orchestrator | ++ export INTERACTIVE=false 2025-11-01 13:30:07.733086 | orchestrator | ++ INTERACTIVE=false 2025-11-01 13:30:07.733098 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-01 13:30:07.733110 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-01 13:30:07.733125 | orchestrator | + source /opt/manager-vars.sh 2025-11-01 13:30:07.733156 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-01 13:30:07.733168 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-01 13:30:07.733179 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-01 13:30:07.733190 | orchestrator | ++ CEPH_VERSION=reef 2025-11-01 13:30:07.733201 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-01 13:30:07.733213 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-01 13:30:07.733223 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-01 13:30:07.733253 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-01 13:30:07.733269 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-01 13:30:07.733287 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-01 13:30:07.733298 | orchestrator | ++ export ARA=false 2025-11-01 13:30:07.733309 | orchestrator | ++ ARA=false 2025-11-01 13:30:07.733324 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-01 13:30:07.733354 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-01 13:30:07.733658 | orchestrator | ++ export TEMPEST=false 2025-11-01 13:30:07.733733 | orchestrator | ++ TEMPEST=false 2025-11-01 13:30:07.733743 | orchestrator | ++ export IS_ZUUL=true 2025-11-01 13:30:07.733750 | orchestrator | ++ IS_ZUUL=true 2025-11-01 13:30:07.733757 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.208 2025-11-01 13:30:07.733765 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.208 2025-11-01 13:30:07.733773 | orchestrator | ++ export EXTERNAL_API=false 2025-11-01 13:30:07.733780 | orchestrator | ++ EXTERNAL_API=false 2025-11-01 13:30:07.733786 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-01 
13:30:07.733793 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-01 13:30:07.733800 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-01 13:30:07.733814 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-01 13:30:07.733821 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-01 13:30:07.733828 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-01 13:30:07.733835 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-11-01 13:30:07.800044 | orchestrator | + docker version 2025-11-01 13:30:08.084024 | orchestrator | Client: Docker Engine - Community 2025-11-01 13:30:08.084065 | orchestrator | Version: 27.5.1 2025-11-01 13:30:08.084074 | orchestrator | API version: 1.47 2025-11-01 13:30:08.084081 | orchestrator | Go version: go1.22.11 2025-11-01 13:30:08.084088 | orchestrator | Git commit: 9f9e405 2025-11-01 13:30:08.084094 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-11-01 13:30:08.084102 | orchestrator | OS/Arch: linux/amd64 2025-11-01 13:30:08.084108 | orchestrator | Context: default 2025-11-01 13:30:08.084115 | orchestrator | 2025-11-01 13:30:08.084122 | orchestrator | Server: Docker Engine - Community 2025-11-01 13:30:08.084128 | orchestrator | Engine: 2025-11-01 13:30:08.084135 | orchestrator | Version: 27.5.1 2025-11-01 13:30:08.084142 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-11-01 13:30:08.084165 | orchestrator | Go version: go1.22.11 2025-11-01 13:30:08.084172 | orchestrator | Git commit: 4c9b3b0 2025-11-01 13:30:08.084178 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-11-01 13:30:08.084185 | orchestrator | OS/Arch: linux/amd64 2025-11-01 13:30:08.084192 | orchestrator | Experimental: false 2025-11-01 13:30:08.084198 | orchestrator | containerd: 2025-11-01 13:30:08.084205 | orchestrator | Version: v1.7.28 2025-11-01 13:30:08.084211 | orchestrator | GitCommit: b98a3aace656320842a23f4a392a33f46af97866 2025-11-01 13:30:08.084218 | orchestrator | runc: 2025-11-01 13:30:08.084225 | orchestrator | Version: 1.3.0 2025-11-01 13:30:08.084231 | orchestrator | GitCommit: v1.3.0-0-g4ca628d1 2025-11-01 13:30:08.084238 | orchestrator | docker-init: 2025-11-01 13:30:08.084244 | orchestrator | Version: 0.19.0 2025-11-01 13:30:08.084251 | orchestrator | GitCommit: de40ad0 2025-11-01 13:30:08.085953 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-11-01 13:30:08.092263 | orchestrator | + set -e 2025-11-01 13:30:08.092279 | orchestrator | + source /opt/manager-vars.sh 2025-11-01 13:30:08.092287 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-01 13:30:08.092295 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-01 13:30:08.092301 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-01 13:30:08.092308 | orchestrator | ++ CEPH_VERSION=reef 2025-11-01 13:30:08.092314 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-01 13:30:08.092321 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-01 13:30:08.092327 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-01 13:30:08.092358 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-01 13:30:08.092365 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-01 13:30:08.092372 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-01 13:30:08.092379 | orchestrator | ++ export ARA=false 2025-11-01 13:30:08.092385 | orchestrator | ++ ARA=false 2025-11-01 13:30:08.092392 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-01 13:30:08.092398 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-01 13:30:08.092405 | orchestrator | ++ 
export TEMPEST=false 2025-11-01 13:30:08.092411 | orchestrator | ++ TEMPEST=false 2025-11-01 13:30:08.092418 | orchestrator | ++ export IS_ZUUL=true 2025-11-01 13:30:08.092424 | orchestrator | ++ IS_ZUUL=true 2025-11-01 13:30:08.092431 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.208 2025-11-01 13:30:08.092438 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.208 2025-11-01 13:30:08.092445 | orchestrator | ++ export EXTERNAL_API=false 2025-11-01 13:30:08.092451 | orchestrator | ++ EXTERNAL_API=false 2025-11-01 13:30:08.092457 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-01 13:30:08.092464 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-01 13:30:08.092471 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-01 13:30:08.092477 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-01 13:30:08.092484 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-01 13:30:08.092490 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-01 13:30:08.092497 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-01 13:30:08.092503 | orchestrator | ++ export INTERACTIVE=false 2025-11-01 13:30:08.092510 | orchestrator | ++ INTERACTIVE=false 2025-11-01 13:30:08.092516 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-01 13:30:08.092526 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-01 13:30:08.092532 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 13:30:08.092539 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-01 13:30:08.092545 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-11-01 13:30:08.095916 | orchestrator | + set -e 2025-11-01 13:30:08.095932 | orchestrator | + VERSION=reef 2025-11-01 13:30:08.096433 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-11-01 13:30:08.100262 | orchestrator | + [[ -n ceph_version: reef ]] 2025-11-01 13:30:08.100278 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-11-01 13:30:08.104475 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-11-01 13:30:08.108697 | orchestrator | + set -e 2025-11-01 13:30:08.108711 | orchestrator | + VERSION=2024.2 2025-11-01 13:30:08.109362 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-11-01 13:30:08.111802 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-11-01 13:30:08.111817 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-11-01 13:30:08.114789 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-11-01 13:30:08.115518 | orchestrator | ++ semver latest 7.0.0 2025-11-01 13:30:08.158873 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-01 13:30:08.158904 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-01 13:30:08.158912 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-11-01 13:30:08.158950 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-11-01 13:30:08.227842 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-11-01 13:30:08.229532 | orchestrator | + source /opt/venv/bin/activate 2025-11-01 13:30:08.230955 | orchestrator | ++ deactivate nondestructive 2025-11-01 13:30:08.230968 | orchestrator | ++ '[' -n '' ']' 2025-11-01 13:30:08.230977 | orchestrator | ++ '[' -n '' ']' 2025-11-01 13:30:08.230987 | orchestrator | ++ hash -r 2025-11-01 13:30:08.231214 | orchestrator | 
++ '[' -n '' ']' 2025-11-01 13:30:08.231225 | orchestrator | ++ unset VIRTUAL_ENV 2025-11-01 13:30:08.231232 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-11-01 13:30:08.231324 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-11-01 13:30:08.231450 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-11-01 13:30:08.231481 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-11-01 13:30:08.231490 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-11-01 13:30:08.231497 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-11-01 13:30:08.231603 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-11-01 13:30:08.231653 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-11-01 13:30:08.231728 | orchestrator | ++ export PATH 2025-11-01 13:30:08.231776 | orchestrator | ++ '[' -n '' ']' 2025-11-01 13:30:08.231862 | orchestrator | ++ '[' -z '' ']' 2025-11-01 13:30:08.231881 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-11-01 13:30:08.231889 | orchestrator | ++ PS1='(venv) ' 2025-11-01 13:30:08.231933 | orchestrator | ++ export PS1 2025-11-01 13:30:08.231945 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-11-01 13:30:08.231951 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-11-01 13:30:08.232030 | orchestrator | ++ hash -r 2025-11-01 13:30:08.232502 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-11-01 13:30:09.800744 | orchestrator | 2025-11-01 13:30:09.800832 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-11-01 13:30:09.800847 | orchestrator | 2025-11-01 13:30:09.800859 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-11-01 13:30:10.451371 | orchestrator | ok: [testbed-manager] 2025-11-01 13:30:10.451447 | orchestrator | 2025-11-01 13:30:10.451460 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-11-01 13:30:11.477837 | orchestrator | changed: [testbed-manager] 2025-11-01 13:30:11.477918 | orchestrator | 2025-11-01 13:30:11.477933 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-11-01 13:30:11.477946 | orchestrator | 2025-11-01 13:30:11.477957 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 13:30:14.168448 | orchestrator | ok: [testbed-manager] 2025-11-01 13:30:14.168513 | orchestrator | 2025-11-01 13:30:14.168526 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-11-01 13:30:14.222315 | orchestrator | ok: [testbed-manager] 2025-11-01 13:30:14.222363 | orchestrator | 2025-11-01 13:30:14.222374 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-11-01 13:30:14.698220 | orchestrator | changed: [testbed-manager] 2025-11-01 13:30:14.698276 | orchestrator | 2025-11-01 13:30:14.698290 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-11-01 13:30:14.746143 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:30:14.746172 | orchestrator | 2025-11-01 13:30:14.746184 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-11-01 13:30:15.144157 | orchestrator | changed: [testbed-manager] 2025-11-01 13:30:15.144234 | orchestrator | 2025-11-01 13:30:15.144247 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-11-01 13:30:15.203738 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:30:15.203821 | orchestrator | 2025-11-01 13:30:15.203835 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-11-01 13:30:15.546788 | orchestrator | ok: [testbed-manager] 2025-11-01 13:30:15.546865 | orchestrator | 2025-11-01 13:30:15.546880 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-11-01 13:30:15.679558 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:30:15.679623 | orchestrator | 2025-11-01 13:30:15.679636 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-11-01 13:30:15.679647 | orchestrator | 2025-11-01 13:30:15.679661 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 13:30:17.564501 | orchestrator | ok: [testbed-manager] 2025-11-01 13:30:17.564584 | orchestrator | 2025-11-01 13:30:17.564599 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-11-01 13:30:17.662950 | orchestrator | included: osism.services.traefik for testbed-manager 2025-11-01 13:30:17.662977 | orchestrator | 2025-11-01 13:30:17.662989 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-11-01 13:30:17.734425 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-11-01 13:30:17.734495 | orchestrator | 2025-11-01 13:30:17.734509 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-11-01 13:30:18.938522 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-11-01 13:30:18.938615 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-11-01 13:30:18.938631 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-11-01 13:30:18.938643 | orchestrator | 2025-11-01 13:30:18.938656 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-11-01 13:30:20.974681 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-11-01 13:30:20.974770 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-11-01 13:30:20.974783 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-11-01 13:30:20.974792 | orchestrator | 2025-11-01 13:30:20.974801 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-11-01 13:30:21.710072 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-01 13:30:21.710162 | orchestrator | changed: [testbed-manager] 2025-11-01 13:30:21.710178 | orchestrator | 2025-11-01 13:30:21.710192 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-11-01 13:30:22.406887 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-01 13:30:22.406977 | orchestrator | changed: [testbed-manager] 2025-11-01 13:30:22.406990 | orchestrator | 2025-11-01 13:30:22.407000 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-11-01 13:30:22.469947 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:30:22.469999 | orchestrator | 2025-11-01 13:30:22.470011 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-11-01 13:30:22.934200 | orchestrator | ok: [testbed-manager] 2025-11-01 13:30:22.934260 | orchestrator | 2025-11-01 13:30:22.934272 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-11-01 13:30:23.022555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-11-01 13:30:23.022592 | orchestrator | 2025-11-01 13:30:23.022604 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-11-01 13:30:24.249370 | orchestrator | changed: [testbed-manager] 2025-11-01 13:30:24.249459 | orchestrator | 2025-11-01 13:30:24.249473 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-11-01 13:30:25.320143 | orchestrator | changed: [testbed-manager] 2025-11-01 13:30:25.320207 | orchestrator | 2025-11-01 13:30:25.320216 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-11-01 13:30:49.010527 | orchestrator | changed: [testbed-manager] 2025-11-01 13:30:49.010627 | orchestrator | 2025-11-01 13:30:49.010645 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-11-01 13:30:49.093361 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:30:49.093397 | orchestrator | 2025-11-01 13:30:49.093411 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-11-01 13:30:49.093422 | orchestrator | 2025-11-01 13:30:49.093434 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 13:30:51.133097 | orchestrator | ok: [testbed-manager] 2025-11-01 13:30:51.133190 | orchestrator | 2025-11-01 13:30:51.133237 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-11-01 13:30:51.286858 | orchestrator | included: osism.services.manager for testbed-manager 2025-11-01 13:30:51.286914 | orchestrator | 2025-11-01 13:30:51.286927 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-11-01 13:30:51.364160 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-11-01 13:30:51.364190 | orchestrator | 2025-11-01 13:30:51.364202 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-11-01 13:30:54.350956 | orchestrator | ok: [testbed-manager] 2025-11-01 13:30:54.351062 | orchestrator | 2025-11-01 13:30:54.351079 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-11-01 13:30:54.404044 | orchestrator | ok: [testbed-manager] 2025-11-01 13:30:54.404088 | orchestrator | 2025-11-01 13:30:54.404107 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-11-01 13:30:54.553176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-11-01 13:30:54.553237 | orchestrator | 2025-11-01 13:30:54.553250 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-11-01 13:30:57.614983 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-11-01 13:30:57.615081 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-11-01 13:30:57.615095 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-11-01 13:30:57.615107 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-11-01 13:30:57.615118 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-11-01 13:30:57.615129 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-11-01 13:30:57.615141 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-11-01 13:30:57.615151 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-11-01 13:30:57.615162 | orchestrator | 2025-11-01 13:30:57.615175 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-11-01 13:30:58.329415 | orchestrator | changed: [testbed-manager] 2025-11-01 13:30:58.329512 | orchestrator | 2025-11-01 13:30:58.329524 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-11-01 13:30:59.076999 | orchestrator | changed: [testbed-manager] 2025-11-01 13:30:59.077095 | orchestrator | 2025-11-01 13:30:59.077112 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-11-01 13:30:59.180125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-11-01 13:30:59.180195 | orchestrator | 2025-11-01 13:30:59.180209 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-11-01 13:31:00.575306 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-11-01 13:31:00.575443 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-11-01 13:31:00.575458 | orchestrator | 2025-11-01 13:31:00.575472 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-11-01 13:31:01.276964 | orchestrator | changed: [testbed-manager] 2025-11-01 13:31:01.277035 | orchestrator | 2025-11-01 13:31:01.277048 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-11-01 13:31:01.329417 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:31:01.329453 | orchestrator | 2025-11-01 13:31:01.329466 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-11-01 13:31:01.426295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-11-01 13:31:01.426360 | orchestrator | 2025-11-01 13:31:01.426374 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2025-11-01 13:31:02.117209 | orchestrator | changed: [testbed-manager] 2025-11-01 13:31:02.117279 | orchestrator | 2025-11-01 13:31:02.117292 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-11-01 13:31:02.181136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-11-01 13:31:02.181241 | orchestrator | 2025-11-01 13:31:02.181271 | orchestrator | 
TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-11-01 13:31:03.679479 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-01 13:31:03.679564 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-01 13:31:03.679577 | orchestrator | changed: [testbed-manager] 2025-11-01 13:31:03.679590 | orchestrator | 2025-11-01 13:31:03.679602 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-11-01 13:31:04.367839 | orchestrator | changed: [testbed-manager] 2025-11-01 13:31:04.367926 | orchestrator | 2025-11-01 13:31:04.367940 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-11-01 13:31:04.435943 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:31:04.436024 | orchestrator | 2025-11-01 13:31:04.436041 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-11-01 13:31:04.539686 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-11-01 13:31:04.539725 | orchestrator | 2025-11-01 13:31:04.539738 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-11-01 13:31:05.102691 | orchestrator | changed: [testbed-manager] 2025-11-01 13:31:05.102786 | orchestrator | 2025-11-01 13:31:05.102801 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-11-01 13:31:05.567061 | orchestrator | changed: [testbed-manager] 2025-11-01 13:31:05.567151 | orchestrator | 2025-11-01 13:31:05.567166 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-11-01 13:31:06.922794 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-11-01 13:31:06.922884 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-11-01 13:31:06.922901 | orchestrator | 2025-11-01 13:31:06.922915 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-11-01 13:31:07.667420 | orchestrator | changed: [testbed-manager] 2025-11-01 13:31:07.667513 | orchestrator | 2025-11-01 13:31:07.667530 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-11-01 13:31:08.132389 | orchestrator | ok: [testbed-manager] 2025-11-01 13:31:08.132459 | orchestrator | 2025-11-01 13:31:08.132473 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-11-01 13:31:08.555294 | orchestrator | changed: [testbed-manager] 2025-11-01 13:31:08.555423 | orchestrator | 2025-11-01 13:31:08.555450 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-11-01 13:31:08.605053 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:31:08.605099 | orchestrator | 2025-11-01 13:31:08.605111 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-11-01 13:31:08.684980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-11-01 13:31:08.685013 | orchestrator | 2025-11-01 13:31:08.685025 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-11-01 13:31:08.737611 | orchestrator | ok: [testbed-manager] 2025-11-01 13:31:08.737665 | 
orchestrator | 2025-11-01 13:31:08.737676 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-11-01 13:31:10.918579 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-11-01 13:31:10.918663 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-11-01 13:31:10.918677 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-11-01 13:31:10.918687 | orchestrator | 2025-11-01 13:31:10.918698 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-11-01 13:31:11.755174 | orchestrator | changed: [testbed-manager] 2025-11-01 13:31:11.755248 | orchestrator | 2025-11-01 13:31:11.755264 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-11-01 13:31:12.546492 | orchestrator | changed: [testbed-manager] 2025-11-01 13:31:12.546587 | orchestrator | 2025-11-01 13:31:12.546604 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-11-01 13:31:13.327767 | orchestrator | changed: [testbed-manager] 2025-11-01 13:31:13.327841 | orchestrator | 2025-11-01 13:31:13.327855 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-11-01 13:31:13.408504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-11-01 13:31:13.408557 | orchestrator | 2025-11-01 13:31:13.408569 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-11-01 13:31:13.452614 | orchestrator | ok: [testbed-manager] 2025-11-01 13:31:13.452691 | orchestrator | 2025-11-01 13:31:13.452708 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-11-01 13:31:14.209094 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-11-01 13:31:14.209186 | orchestrator | 2025-11-01 13:31:14.209201 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-11-01 13:31:14.315857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-11-01 13:31:14.315895 | orchestrator | 2025-11-01 13:31:14.315907 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-11-01 13:31:15.063991 | orchestrator | changed: [testbed-manager] 2025-11-01 13:31:15.064075 | orchestrator | 2025-11-01 13:31:15.064089 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-11-01 13:31:15.740176 | orchestrator | ok: [testbed-manager] 2025-11-01 13:31:15.740266 | orchestrator | 2025-11-01 13:31:15.740281 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-11-01 13:31:15.792589 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:31:15.792617 | orchestrator | 2025-11-01 13:31:15.792628 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-11-01 13:31:15.857281 | orchestrator | ok: [testbed-manager] 2025-11-01 13:31:15.857306 | orchestrator | 2025-11-01 13:31:15.857317 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-11-01 13:31:16.761306 | orchestrator | changed: [testbed-manager] 2025-11-01 
13:31:16.761434 | orchestrator | 2025-11-01 13:31:16.761448 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-11-01 13:32:32.094247 | orchestrator | changed: [testbed-manager] 2025-11-01 13:32:32.094403 | orchestrator | 2025-11-01 13:32:32.094423 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-11-01 13:32:33.172718 | orchestrator | ok: [testbed-manager] 2025-11-01 13:32:33.172820 | orchestrator | 2025-11-01 13:32:33.172837 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-11-01 13:32:33.283318 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:32:33.283390 | orchestrator | 2025-11-01 13:32:33.283406 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-11-01 13:32:36.092971 | orchestrator | changed: [testbed-manager] 2025-11-01 13:32:36.093077 | orchestrator | 2025-11-01 13:32:36.093095 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-11-01 13:32:36.147132 | orchestrator | ok: [testbed-manager] 2025-11-01 13:32:36.147163 | orchestrator | 2025-11-01 13:32:36.147175 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-11-01 13:32:36.147187 | orchestrator | 2025-11-01 13:32:36.147198 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-11-01 13:32:36.213828 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:32:36.213852 | orchestrator | 2025-11-01 13:32:36.213863 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-11-01 13:33:36.269243 | orchestrator | Pausing for 60 seconds 2025-11-01 13:33:36.269389 | orchestrator | changed: [testbed-manager] 2025-11-01 13:33:36.269405 | orchestrator | 2025-11-01 13:33:36.269418 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-11-01 13:33:40.937076 | orchestrator | changed: [testbed-manager] 2025-11-01 13:33:40.937166 | orchestrator | 2025-11-01 13:33:40.937180 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-11-01 13:34:43.292803 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-11-01 13:34:43.292930 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-11-01 13:34:43.292946 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
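The handler above polls the manager containers until Docker reports them healthy, counting a retry budget down from 50; the same pattern shows up further below as the shell helper wait_for_container_healthy. A minimal bash sketch of that loop, reconstructed from the traced calls and the docker inspect format string; the polling interval and the error message are assumptions, not taken from the testbed sources.

#!/usr/bin/env bash
# Poll a container's health state until it is "healthy" or the attempt
# budget is exhausted; mirrors calls such as
# "wait_for_container_healthy 60 ceph-ansible" traced later in this log.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "${name}")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "${name} did not become healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5   # interval is an assumption; the xtrace does not show one
    done
}

wait_for_container_healthy 60 ceph-ansible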
2025-11-01 13:34:43.293044 | orchestrator | changed: [testbed-manager] 2025-11-01 13:34:43.293060 | orchestrator | 2025-11-01 13:34:43.293071 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-11-01 13:34:55.405774 | orchestrator | changed: [testbed-manager] 2025-11-01 13:34:55.405854 | orchestrator | 2025-11-01 13:34:55.405869 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-11-01 13:34:55.497579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-11-01 13:34:55.497634 | orchestrator | 2025-11-01 13:34:55.497648 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-11-01 13:34:55.497660 | orchestrator | 2025-11-01 13:34:55.497671 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-11-01 13:34:55.540889 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:34:55.540944 | orchestrator | 2025-11-01 13:34:55.540960 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2025-11-01 13:34:55.631741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2025-11-01 13:34:55.631787 | orchestrator | 2025-11-01 13:34:55.631799 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2025-11-01 13:34:56.470733 | orchestrator | changed: [testbed-manager] 2025-11-01 13:34:56.470786 | orchestrator | 2025-11-01 13:34:56.470800 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2025-11-01 13:35:00.386896 | orchestrator | ok: [testbed-manager] 2025-11-01 13:35:00.386944 | orchestrator | 2025-11-01 13:35:00.386973 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2025-11-01 13:35:00.470184 | orchestrator | ok: [testbed-manager] => { 2025-11-01 13:35:00.470213 | orchestrator | "version_check_result.stdout_lines": [ 2025-11-01 13:35:00.470225 | orchestrator | "=== OSISM Container Version Check ===", 2025-11-01 13:35:00.470237 | orchestrator | "Checking running containers against expected versions...", 2025-11-01 13:35:00.470248 | orchestrator | "", 2025-11-01 13:35:00.470259 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2025-11-01 13:35:00.470270 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2025-11-01 13:35:00.470281 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.470292 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2025-11-01 13:35:00.470303 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.470314 | orchestrator | "", 2025-11-01 13:35:00.470344 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2025-11-01 13:35:00.470355 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2025-11-01 13:35:00.470366 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.470377 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2025-11-01 13:35:00.470388 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.470399 | orchestrator | "", 2025-11-01 13:35:00.470410 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes 
Service)", 2025-11-01 13:35:00.470421 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2025-11-01 13:35:00.470432 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.470443 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2025-11-01 13:35:00.470453 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.470464 | orchestrator | "", 2025-11-01 13:35:00.470475 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2025-11-01 13:35:00.470486 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2025-11-01 13:35:00.470497 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.470508 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2025-11-01 13:35:00.470519 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.470530 | orchestrator | "", 2025-11-01 13:35:00.470541 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2025-11-01 13:35:00.470570 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2025-11-01 13:35:00.470582 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.470592 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2025-11-01 13:35:00.470603 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.470614 | orchestrator | "", 2025-11-01 13:35:00.470624 | orchestrator | "Checking service: osismclient (OSISM Client)", 2025-11-01 13:35:00.470635 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-11-01 13:35:00.470646 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.470657 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-11-01 13:35:00.470667 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.470678 | orchestrator | "", 2025-11-01 13:35:00.470689 | orchestrator | "Checking service: ara-server (ARA Server)", 2025-11-01 13:35:00.470700 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2025-11-01 13:35:00.470710 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.470721 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2025-11-01 13:35:00.470732 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.470742 | orchestrator | "", 2025-11-01 13:35:00.470759 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2025-11-01 13:35:00.470771 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.3", 2025-11-01 13:35:00.470781 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.470792 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.3", 2025-11-01 13:35:00.470806 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.470818 | orchestrator | "", 2025-11-01 13:35:00.470831 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2025-11-01 13:35:00.470843 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2025-11-01 13:35:00.470859 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.470873 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2025-11-01 13:35:00.470885 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.470898 | orchestrator | "", 2025-11-01 13:35:00.470910 | orchestrator | "Checking service: redis (Redis Cache)", 2025-11-01 13:35:00.470923 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine", 2025-11-01 13:35:00.470935 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.470947 | orchestrator | 
" Running: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine", 2025-11-01 13:35:00.470959 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.470971 | orchestrator | "", 2025-11-01 13:35:00.470984 | orchestrator | "Checking service: api (OSISM API Service)", 2025-11-01 13:35:00.470996 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-11-01 13:35:00.471009 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.471022 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-11-01 13:35:00.471034 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.471046 | orchestrator | "", 2025-11-01 13:35:00.471059 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2025-11-01 13:35:00.471072 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-11-01 13:35:00.471084 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.471097 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-11-01 13:35:00.471109 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.471121 | orchestrator | "", 2025-11-01 13:35:00.471134 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2025-11-01 13:35:00.471147 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-11-01 13:35:00.471158 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.471169 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-11-01 13:35:00.471180 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.471190 | orchestrator | "", 2025-11-01 13:35:00.471201 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2025-11-01 13:35:00.471211 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-11-01 13:35:00.471222 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.471240 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-11-01 13:35:00.471251 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.471262 | orchestrator | "", 2025-11-01 13:35:00.471272 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2025-11-01 13:35:00.471293 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-11-01 13:35:00.471305 | orchestrator | " Enabled: true", 2025-11-01 13:35:00.471316 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-11-01 13:35:00.471353 | orchestrator | " Status: ✅ MATCH", 2025-11-01 13:35:00.471364 | orchestrator | "", 2025-11-01 13:35:00.471375 | orchestrator | "=== Summary ===", 2025-11-01 13:35:00.471385 | orchestrator | "Errors (version mismatches): 0", 2025-11-01 13:35:00.471396 | orchestrator | "Warnings (expected containers not running): 0", 2025-11-01 13:35:00.471406 | orchestrator | "", 2025-11-01 13:35:00.471417 | orchestrator | "✅ All running containers match expected versions!" 
2025-11-01 13:35:00.471428 | orchestrator | ] 2025-11-01 13:35:00.471439 | orchestrator | } 2025-11-01 13:35:00.471450 | orchestrator | 2025-11-01 13:35:00.471462 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2025-11-01 13:35:00.530641 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:35:00.530680 | orchestrator | 2025-11-01 13:35:00.530693 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:35:00.530705 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-11-01 13:35:00.530716 | orchestrator | 2025-11-01 13:35:00.663089 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-11-01 13:35:00.663131 | orchestrator | + deactivate 2025-11-01 13:35:00.663144 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-11-01 13:35:00.663157 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-11-01 13:35:00.663168 | orchestrator | + export PATH 2025-11-01 13:35:00.663179 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-11-01 13:35:00.663191 | orchestrator | + '[' -n '' ']' 2025-11-01 13:35:00.663202 | orchestrator | + hash -r 2025-11-01 13:35:00.663213 | orchestrator | + '[' -n '' ']' 2025-11-01 13:35:00.663224 | orchestrator | + unset VIRTUAL_ENV 2025-11-01 13:35:00.663235 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-11-01 13:35:00.663246 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-11-01 13:35:00.663500 | orchestrator | + unset -f deactivate 2025-11-01 13:35:00.663522 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-11-01 13:35:00.670515 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-11-01 13:35:00.670538 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-11-01 13:35:00.670549 | orchestrator | + local max_attempts=60 2025-11-01 13:35:00.670560 | orchestrator | + local name=ceph-ansible 2025-11-01 13:35:00.670571 | orchestrator | + local attempt_num=1 2025-11-01 13:35:00.671425 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:35:00.704797 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-01 13:35:00.704909 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-11-01 13:35:00.704927 | orchestrator | + local max_attempts=60 2025-11-01 13:35:00.704939 | orchestrator | + local name=kolla-ansible 2025-11-01 13:35:00.704950 | orchestrator | + local attempt_num=1 2025-11-01 13:35:00.706440 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-11-01 13:35:00.746994 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-01 13:35:00.747020 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-11-01 13:35:00.747031 | orchestrator | + local max_attempts=60 2025-11-01 13:35:00.747042 | orchestrator | + local name=osism-ansible 2025-11-01 13:35:00.747053 | orchestrator | + local attempt_num=1 2025-11-01 13:35:00.747899 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-11-01 13:35:00.789141 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-01 13:35:00.789166 | orchestrator | + [[ true == \t\r\u\e ]] 2025-11-01 13:35:00.789177 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-11-01 
13:35:01.552823 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-11-01 13:35:01.755254 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-11-01 13:35:01.755392 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2025-11-01 13:35:01.755421 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2025-11-01 13:35:01.755446 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2025-11-01 13:35:01.755473 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2025-11-01 13:35:01.755493 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2025-11-01 13:35:01.755529 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2025-11-01 13:35:01.755542 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2025-11-01 13:35:01.755553 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2025-11-01 13:35:01.755564 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2025-11-01 13:35:01.755574 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2025-11-01 13:35:01.755585 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2025-11-01 13:35:01.755598 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2025-11-01 13:35:01.755617 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2025-11-01 13:35:01.755635 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2025-11-01 13:35:01.755653 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2025-11-01 13:35:01.763466 | orchestrator | ++ semver latest 7.0.0
2025-11-01 13:35:01.825020 | orchestrator | + [[ -1 -ge 0 ]]
2025-11-01 13:35:01.825060 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-11-01 13:35:01.825072 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-11-01 13:35:01.830093 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-11-01 13:35:14.326556 | orchestrator | 2025-11-01 13:35:14 | INFO  | Task bfb8dce1-b29a-4a6b-9f66-662a9e7e5e10 (resolvconf) was prepared for execution.
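For reference, the wait_for_container_healthy helper traced above can be reconstructed roughly as follows. This is a sketch inferred from the trace only; the polling interval and the failure handling are assumptions, not the deploy script's actual code.

wait_for_container_healthy() {
    # Poll the Docker health status of a container until it reports "healthy",
    # giving up after max_attempts checks.
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # polling interval assumed; not visible in the trace
    done
}

# Called as in the trace above, e.g.: wait_for_container_healthy 60 ceph-ansible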
2025-11-01 13:35:14.326670 | orchestrator | 2025-11-01 13:35:14 | INFO  | It takes a moment until task bfb8dce1-b29a-4a6b-9f66-662a9e7e5e10 (resolvconf) has been started and output is visible here. 2025-11-01 13:35:29.587019 | orchestrator | 2025-11-01 13:35:29.587086 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-11-01 13:35:29.587100 | orchestrator | 2025-11-01 13:35:29.587112 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 13:35:29.587123 | orchestrator | Saturday 01 November 2025 13:35:18 +0000 (0:00:00.157) 0:00:00.157 ***** 2025-11-01 13:35:29.587134 | orchestrator | ok: [testbed-manager] 2025-11-01 13:35:29.587146 | orchestrator | 2025-11-01 13:35:29.587157 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-11-01 13:35:29.587169 | orchestrator | Saturday 01 November 2025 13:35:22 +0000 (0:00:04.125) 0:00:04.282 ***** 2025-11-01 13:35:29.587179 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:35:29.587191 | orchestrator | 2025-11-01 13:35:29.587203 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-11-01 13:35:29.587213 | orchestrator | Saturday 01 November 2025 13:35:22 +0000 (0:00:00.070) 0:00:04.353 ***** 2025-11-01 13:35:29.587224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-11-01 13:35:29.587236 | orchestrator | 2025-11-01 13:35:29.587255 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-11-01 13:35:29.587267 | orchestrator | Saturday 01 November 2025 13:35:23 +0000 (0:00:00.091) 0:00:04.444 ***** 2025-11-01 13:35:29.587278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-11-01 13:35:29.587289 | orchestrator | 2025-11-01 13:35:29.587300 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-11-01 13:35:29.587311 | orchestrator | Saturday 01 November 2025 13:35:23 +0000 (0:00:00.092) 0:00:04.536 ***** 2025-11-01 13:35:29.587365 | orchestrator | ok: [testbed-manager] 2025-11-01 13:35:29.587377 | orchestrator | 2025-11-01 13:35:29.587387 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-11-01 13:35:29.587398 | orchestrator | Saturday 01 November 2025 13:35:24 +0000 (0:00:01.208) 0:00:05.744 ***** 2025-11-01 13:35:29.587409 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:35:29.587420 | orchestrator | 2025-11-01 13:35:29.587431 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-11-01 13:35:29.587441 | orchestrator | Saturday 01 November 2025 13:35:24 +0000 (0:00:00.070) 0:00:05.815 ***** 2025-11-01 13:35:29.587452 | orchestrator | ok: [testbed-manager] 2025-11-01 13:35:29.587462 | orchestrator | 2025-11-01 13:35:29.587473 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-11-01 13:35:29.587484 | orchestrator | Saturday 01 November 2025 13:35:25 +0000 (0:00:00.546) 0:00:06.362 ***** 2025-11-01 13:35:29.587494 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:35:29.587505 | orchestrator | 2025-11-01 13:35:29.587515 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-11-01 13:35:29.587527 | orchestrator | Saturday 01 November 2025 13:35:25 +0000 (0:00:00.073) 0:00:06.436 ***** 2025-11-01 13:35:29.587538 | orchestrator | changed: [testbed-manager] 2025-11-01 13:35:29.587548 | orchestrator | 2025-11-01 13:35:29.587559 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-11-01 13:35:29.587570 | orchestrator | Saturday 01 November 2025 13:35:25 +0000 (0:00:00.565) 0:00:07.001 ***** 2025-11-01 13:35:29.587581 | orchestrator | changed: [testbed-manager] 2025-11-01 13:35:29.587593 | orchestrator | 2025-11-01 13:35:29.587605 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-11-01 13:35:29.587617 | orchestrator | Saturday 01 November 2025 13:35:26 +0000 (0:00:01.193) 0:00:08.195 ***** 2025-11-01 13:35:29.587629 | orchestrator | ok: [testbed-manager] 2025-11-01 13:35:29.587641 | orchestrator | 2025-11-01 13:35:29.587654 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-11-01 13:35:29.587685 | orchestrator | Saturday 01 November 2025 13:35:27 +0000 (0:00:01.101) 0:00:09.296 ***** 2025-11-01 13:35:29.587697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-11-01 13:35:29.587710 | orchestrator | 2025-11-01 13:35:29.587722 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-11-01 13:35:29.587735 | orchestrator | Saturday 01 November 2025 13:35:28 +0000 (0:00:00.093) 0:00:09.389 ***** 2025-11-01 13:35:29.587747 | orchestrator | changed: [testbed-manager] 2025-11-01 13:35:29.587759 | orchestrator | 2025-11-01 13:35:29.587771 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:35:29.587784 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-01 13:35:29.587797 | orchestrator | 2025-11-01 13:35:29.587810 | orchestrator | 2025-11-01 13:35:29.587822 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:35:29.587834 | orchestrator | Saturday 01 November 2025 13:35:29 +0000 (0:00:01.242) 0:00:10.631 ***** 2025-11-01 13:35:29.587846 | orchestrator | =============================================================================== 2025-11-01 13:35:29.587858 | orchestrator | Gathering Facts --------------------------------------------------------- 4.13s 2025-11-01 13:35:29.587869 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.24s 2025-11-01 13:35:29.587879 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.21s 2025-11-01 13:35:29.587890 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.19s 2025-11-01 13:35:29.587900 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.10s 2025-11-01 13:35:29.587911 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s 2025-11-01 13:35:29.587932 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.55s 2025-11-01 13:35:29.587944 | orchestrator | osism.commons.resolvconf : 
Include distribution specific configuration tasks --- 0.09s 2025-11-01 13:35:29.587954 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-11-01 13:35:29.587965 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-11-01 13:35:29.587981 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2025-11-01 13:35:29.587992 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-11-01 13:35:29.588003 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-11-01 13:35:29.966229 | orchestrator | + osism apply sshconfig 2025-11-01 13:35:42.574763 | orchestrator | 2025-11-01 13:35:42 | INFO  | Task 946abdc1-132f-45f2-8d06-ab75a016aac1 (sshconfig) was prepared for execution. 2025-11-01 13:35:42.574872 | orchestrator | 2025-11-01 13:35:42 | INFO  | It takes a moment until task 946abdc1-132f-45f2-8d06-ab75a016aac1 (sshconfig) has been started and output is visible here. 2025-11-01 13:35:55.389579 | orchestrator | 2025-11-01 13:35:55.389655 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-11-01 13:35:55.389661 | orchestrator | 2025-11-01 13:35:55.389665 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-11-01 13:35:55.389670 | orchestrator | Saturday 01 November 2025 13:35:47 +0000 (0:00:00.170) 0:00:00.170 ***** 2025-11-01 13:35:55.389674 | orchestrator | ok: [testbed-manager] 2025-11-01 13:35:55.389679 | orchestrator | 2025-11-01 13:35:55.389683 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-11-01 13:35:55.389687 | orchestrator | Saturday 01 November 2025 13:35:47 +0000 (0:00:00.594) 0:00:00.764 ***** 2025-11-01 13:35:55.389691 | orchestrator | changed: [testbed-manager] 2025-11-01 13:35:55.389696 | orchestrator | 2025-11-01 13:35:55.389700 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-11-01 13:35:55.389721 | orchestrator | Saturday 01 November 2025 13:35:48 +0000 (0:00:00.585) 0:00:01.350 ***** 2025-11-01 13:35:55.389725 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-11-01 13:35:55.389729 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-11-01 13:35:55.389733 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-11-01 13:35:55.389737 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-11-01 13:35:55.389740 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-11-01 13:35:55.389744 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-11-01 13:35:55.389748 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-11-01 13:35:55.389752 | orchestrator | 2025-11-01 13:35:55.389756 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-11-01 13:35:55.389759 | orchestrator | Saturday 01 November 2025 13:35:54 +0000 (0:00:06.199) 0:00:07.549 ***** 2025-11-01 13:35:55.389763 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:35:55.389767 | orchestrator | 2025-11-01 13:35:55.389770 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-11-01 13:35:55.389774 | orchestrator | Saturday 01 November 2025 
13:35:54 +0000 (0:00:00.085) 0:00:07.634 ***** 2025-11-01 13:35:55.389778 | orchestrator | changed: [testbed-manager] 2025-11-01 13:35:55.389781 | orchestrator | 2025-11-01 13:35:55.389785 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:35:55.389790 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:35:55.389795 | orchestrator | 2025-11-01 13:35:55.389798 | orchestrator | 2025-11-01 13:35:55.389802 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:35:55.389806 | orchestrator | Saturday 01 November 2025 13:35:55 +0000 (0:00:00.615) 0:00:08.250 ***** 2025-11-01 13:35:55.389810 | orchestrator | =============================================================================== 2025-11-01 13:35:55.389814 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.20s 2025-11-01 13:35:55.389818 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.62s 2025-11-01 13:35:55.389822 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.59s 2025-11-01 13:35:55.389825 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.59s 2025-11-01 13:35:55.389829 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2025-11-01 13:35:55.744711 | orchestrator | + osism apply known-hosts 2025-11-01 13:36:07.946587 | orchestrator | 2025-11-01 13:36:07 | INFO  | Task fcbf226f-c29e-4f7c-9b7a-66102bd64c23 (known-hosts) was prepared for execution. 2025-11-01 13:36:07.946674 | orchestrator | 2025-11-01 13:36:07 | INFO  | It takes a moment until task fcbf226f-c29e-4f7c-9b7a-66102bd64c23 (known-hosts) has been started and output is visible here. 
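The sshconfig play above creates ~/.ssh/config.d, writes one snippet per host, and then assembles the snippets into a single ~/.ssh/config. A minimal shell sketch of that pattern, assuming the operator user dragon seen earlier in the log and purely illustrative per-host options (the actual role templates these from the Ansible inventory):

# Illustrative only: per-host options and the file mode are assumptions.
mkdir -p ~/.ssh/config.d
for host in testbed-manager testbed-node-{0..5}; do
    cat > ~/.ssh/config.d/"$host" <<EOF
Host $host
    User dragon
    StrictHostKeyChecking ask
EOF
done
cat ~/.ssh/config.d/* > ~/.ssh/config    # "Assemble ssh config"
chmod 0600 ~/.ssh/config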
2025-11-01 13:36:26.114568 | orchestrator | 2025-11-01 13:36:26.114640 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-11-01 13:36:26.114654 | orchestrator | 2025-11-01 13:36:26.114666 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-11-01 13:36:26.114678 | orchestrator | Saturday 01 November 2025 13:36:12 +0000 (0:00:00.170) 0:00:00.170 ***** 2025-11-01 13:36:26.114701 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-11-01 13:36:26.114713 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-11-01 13:36:26.114724 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-11-01 13:36:26.114735 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-11-01 13:36:26.114745 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-11-01 13:36:26.114756 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-11-01 13:36:26.114782 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-11-01 13:36:26.114883 | orchestrator | 2025-11-01 13:36:26.114908 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-11-01 13:36:26.114921 | orchestrator | Saturday 01 November 2025 13:36:18 +0000 (0:00:06.314) 0:00:06.484 ***** 2025-11-01 13:36:26.114933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-11-01 13:36:26.114946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-11-01 13:36:26.114957 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-11-01 13:36:26.114968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-11-01 13:36:26.114980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-11-01 13:36:26.114991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-11-01 13:36:26.115001 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-11-01 13:36:26.115012 | orchestrator | 2025-11-01 13:36:26.115023 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 13:36:26.115034 | orchestrator | Saturday 01 November 2025 13:36:19 +0000 (0:00:00.179) 0:00:06.664 ***** 2025-11-01 13:36:26.115048 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCc+dxohciz01FiEHQw4vVmAJB09tAYGm2DOI/poWG0BRBXi3B4uVagR5BT1Yq4lQNNI1Xgk0Ms68bmJYTmxPmYZZoZyTWCV0efwRPvQBUToMqDGpuzBjBA3dk4wNnr+CJgrkvO/huOtAntXxJomGiwKx72DFVnlDFnN8Vepi1SNiYSsR86VVfN1Sy0K73EsXpD9a8Nx8V4HNdOF3QqSxqeB69v+pAzQb1AzptnGGMJ2XTlvm8Dhw9P7fGF+0NStEgr5iccssEQi54lFfPWknJKjXZPRD8VruRFI8APvpcr79+FWwuPJGg3d6au4o9AzFqbeEBoJ2aMO3S29vaKzV117wQct1XH8mx89/l/6vOp+MoPxWfQhtEy/+z9+5RY4DZTwiKtxMmbV9wI3Bw/JLqx9D864He3uFWnRRdupYN4aEhIsp+TkRUJUfqzfXpZ9Ikju++jyhZFk5oqk81isbAeu6FLxnIcA9m2ULAuOFvk/JUO0w/OxouOUofpuPl6ozk=) 2025-11-01 13:36:26.115063 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPjrQqqZL/PP3a0bTzOXDo35CjNvvnW9teB72U2Cd4BE1Nsuumozc9sa9OIgady3DHwW3rwEIMBZBKw/uFmIJps=) 2025-11-01 13:36:26.115076 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOR8IJpUIjQj+hdcsaPATHKiWm6T5941JNar+Xs47DtF) 2025-11-01 13:36:26.115104 | orchestrator | 2025-11-01 13:36:26.115117 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 13:36:26.115129 | orchestrator | Saturday 01 November 2025 13:36:20 +0000 (0:00:01.286) 0:00:07.950 ***** 2025-11-01 13:36:26.115161 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3xcbyEnN4cfqGXKDUKiqa8U4CHbtW8+ZSfHlXTss0F6rGqftIvGl3MJeQ2yJ7Lvh1rttnM0chZSnLbUdvbbxAYmSRyexpPpNPJk9w5i/tXuESOBBQW9wXfsP9v9remaIjNKRzP5fwxGGyfGLmttG8jCVqIE+V/23Lw+Vui5v6eaB6aHKTXKN5DFRhYFfcDzjZtwQsYiqG5iGLXwZScQkEBWVQx6vQ03L7NfI+GH0zMZTS4QNK6EFgnHMuLRd9oncEuc8lCBr7jKgaqkcIYx1ijFOtQAqRchGqs4/Fc8yYDGLndsWXKmL8yRtccZlS7qXCOZjAwXNfeK4aeLgbY/upFcrcbEMNjigv9EpAu3LEfqsUKmcs7XQ8S7zUONZWAV9KBs3BBgzViLi49dLzvMHDZ5sj3aIr6vk0jAgYleGxSMAxcokf5H+30tbQwaPgh1E0BYp7jNFWllF4rroGntTwHGx8rg6KkA81oRgJRNdXEYAQZYa9CNh3EC+pq4UM860=) 2025-11-01 13:36:26.115184 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFdG9ZJdnDhIviHMplrV8sD9JOT5PtOto320BIVEvNMJI6pWqqA6G+yEh5AQaOIcQm+jA9IrySuTWO/fI7P7Ktk=) 2025-11-01 13:36:26.115197 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP7XW9j7V7qjtu68uTuJfh1dO9O3N7L5cCGxQXAd8YNj) 2025-11-01 13:36:26.115209 | orchestrator | 2025-11-01 13:36:26.115222 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 13:36:26.115234 | orchestrator | Saturday 01 November 2025 13:36:21 +0000 (0:00:01.128) 0:00:09.079 ***** 2025-11-01 13:36:26.115352 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdGCeiW/8N8Zz8p+C7+3xTOYvEzODda5YKvgf4h3EPVbV9mgk2edIuyUtyxfjAIE0kmMGboIRBCGXXBJRj5YvyJdeS9bVPQkl1NqEB9ZhJLBn4sPgwed3yDB2OFUaGCTRsdaxspOgSotRp6XPTRx3OY8YD1ncebb7+1Xvre/+TlOCQOYY9hAfyvDPVPCZH+FgHUdGgaqfpbz81BeQxEkvKfqE1bfvy1m3McacT4Fp4pr7Y1tOJf1ELiNHkRZcPSZrfQ70QSqWHAkNp/FodHpIAFdGWQi8D3bu/E5eq+0jsG7eGj6JMt7EaE8dhP1bO3swwFv6jrv4YvR8NSOplLniMthdZuUKeQ5pslB1+ok2nBSXKG6EL0D/b6UeNfDY4tJyLT6V7Vi5OVPZ0o+HM2T5byH2m01jItRVyB41/NzUb8fEHOukGKfMPCzT9dGVZNMwFrhcuQfzxKM/3GhaVUv0GCY4rYF9+4xy0fg5QWoph5tGKDAq6pFy43LmKCPca1is=) 2025-11-01 13:36:26.115367 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDK0xU8MJocS4aKJhGrvtsjoA2OJRhtSLPZWkBzud0fZ2BU+vEsI6Q74YNFUNo4YsAQp/NnGZR4dXqFWmP4BU6g=) 2025-11-01 
13:36:26.115380 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBDR4SMkO8V/2FkpoVcUZgLs3ys0GL3q8dqKY0oMPZbx) 2025-11-01 13:36:26.115392 | orchestrator | 2025-11-01 13:36:26.115404 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 13:36:26.115417 | orchestrator | Saturday 01 November 2025 13:36:22 +0000 (0:00:01.173) 0:00:10.252 ***** 2025-11-01 13:36:26.115429 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFPZ8vfaMnBlqze3ziQIF69FCHxOTp/41CeW8Wpxqrjpq6dFjMBm0oFg3tM8bAxX4BpE3dduyaGrTJxv+5ysmm0=) 2025-11-01 13:36:26.115464 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCCqsfAsc6I+27KwRjgtC8Bzn+l0aSVagjAdFvZ1HI4hUsn5DVoxoACeoCrNRCTHy6/LJpLMW+BXoKb5tETove0VQvUEc6AcaeanefGA9vYLW/sHHbZDGdQRgda6EREm8zDrlGz0Kh3JEMHgskEf4655UiJMEhxAqaZcWvMtWbsclAP1EyfqMBnhWUk7OkGssOlkqGuF/KOlufh/tnp7Zt1t6bGxWiX4hqbJjpBcPu5k/GKWyjsWwpGfkkmb+gIzmaPFc3FQgej1/1vgHpXJuJpP9Enr+z0rVU6aKEVIUMXISlom/LTr+pcCL5h1D+EazwC36+/M2rlQczDOfRnZabTkxJfCQi7/df3vqHJO57n/ljPOxNF6OEp6SIDc8LpmZwB123AG8nWaLZKswymYr7f0kWJG91X9wahNHEFXHzXDG6g5vfSaNBLhs0DeCceCV4c8uHHOXCQaO1mtJXTOI/4GeWQYY1Papew6xJalfoesLDXFoMoeWaXyQkf6Lw1Xls=) 2025-11-01 13:36:26.115477 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOT06qO4/sU7IU16Un3kx3E8hsZltOEwrYWZCG93qWJx) 2025-11-01 13:36:26.115488 | orchestrator | 2025-11-01 13:36:26.115499 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 13:36:26.115510 | orchestrator | Saturday 01 November 2025 13:36:23 +0000 (0:00:01.169) 0:00:11.422 ***** 2025-11-01 13:36:26.115520 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKLLcADEq3vBcbddlc3Gt9k9GPTkbV1L3dDLEtYFWiwJ50g4vofK//eOzPBHy2N64U4297/lxpdjAp4Avvq8miE=) 2025-11-01 13:36:26.115531 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCV31aJax6cjCZB9GMpFKJKN+2l3ZVbcekHs5FsbdqH0iYZz2fL3eiRWl1aD0kgow0+P+goW0/oXjE7qd+OF9Hloe1YcrsqQZa8B6yYTl70pdK0aD7MeZk57Z0cLzRFMNkvpxipZSX0QmZpU47qb2dj1mxE3/z9dtKBCV5Jih1qTzunwJbwaWmFwSn0Y9MbrU7+QaWtPBM2HFBx1eQpRb70V75jts5Sw0icbSXSyOuK8F5zQfd+6u4qUF0xgPUkVErpQ+sFszHeG/uhKO3DIBJi7sLR/FNQaXoqy8vPnoD99zGpb4BjJiBKQu6g+F0djq7efKtbpFunQN+Xhzj+InIOUShGx03N0LeCbRKE/c/rDmLQ3O1S7uUUlnRZsKEsDgRxtdQaMl9wKhZgjqWs/b/nu2UI9AReCHMNcq1BmoDAVNfQTCVz4As0aBp2cKJ0sat+UtNxcO78qNp7Aso2ICBjrPkDgVmD/13f2k/yOKSAF0bIjP4zxdcP5rhDAhKuTN8=) 2025-11-01 13:36:26.115563 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINyvZCGu56cJXHufPaPq1bCil0taw9PAPWGxIcipxizZ) 2025-11-01 13:36:26.115575 | orchestrator | 2025-11-01 13:36:26.115586 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 13:36:26.115597 | orchestrator | Saturday 01 November 2025 13:36:24 +0000 (0:00:01.186) 0:00:12.608 ***** 2025-11-01 13:36:26.115615 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKlxxBGUuPrD40NBhGu4khbS+13yX6gO1tkFNMo3nMkKii/2AUq0OWtv01yTnOPkJheNTmHV5RXJtbxMDZvxNYQ=) 2025-11-01 13:36:37.619624 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDVFfx6Anu7Xi0359JV0ecjYLgwoe9YRSyre12UPVk9/EE7oqoOWWWXdQDi1cfvgCmWaQQBw1ek57fXQfoF4ym5NJ5OPiZ/YdTEZPKg2MxqnJWNBUxXXcHkYybYpGMwf8MI82QIYllXIM+sa2hxvJaxMo4Vimy6gS1I5whZyOZ4qo5guscyD5g4eRjbxYHDRh+26QCChwLPHjkvchfO6Ly24xYvSgw9Gwg1y+DPZeyeBsjeGQ2wQSGTahxghYypq47w4Ua+fPAK39ED0vxmaJcoZuRjzrJt8/OdsfgMYcDGy+UDF3df8zXtAagZ6SXPPIO5vRwJ1/vUroxtDhKLerTVxxocmt5Ces6tZ4ipGEYO92lukiGOowaR7Ym4sLqTZGMOqgRFpR4ZcJX1IPk5cv9m6LhPYxoI29cvtZXFJMPrNO3f9PPwZ+Xo7aotQNY7rn0lrHKIWAPQUxioccZH0GBP1oRLfI3siD8QqZif5dzcKzi2zScg7hsFw43AymlUjGU=) 2025-11-01 13:36:37.619733 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAyuducmIZCsZ4nG0SmNzxM3c/Nde6Ui6shOJTL5Jx89) 2025-11-01 13:36:37.619751 | orchestrator | 2025-11-01 13:36:37.619764 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 13:36:37.619776 | orchestrator | Saturday 01 November 2025 13:36:26 +0000 (0:00:01.142) 0:00:13.750 ***** 2025-11-01 13:36:37.619787 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKj66ettQ6kXawZwo+Bil/Eg1ZM0N68xK6KbiNYvDg2l) 2025-11-01 13:36:37.619800 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCHbaHO8gcYu9WU/BWt+qUHCCGU+g7lbGmYBZbHfXFRG02L0Cubb3F1TYnIUcwCuknvnt5TcNVY3ayM0UL2EtAv4I4Fljmfke9nUV1JY0bYHaY5FKj55z5JooEG/bP+SI14Gg+K5bh/vJNWZ0YGCbDEDD7y2xMt9W5V/pFmwDOwkXIPhI9SXfFhiOYulemkwtRqsLj393lHotU6GbMF2ShfnMhJpUmLwT2FI5evOBt5IwuiwVOPdCHbNfKMJ9ij89LhlQUta1z3JlBqBc8Q8kg0ea50NmwBrUnps0bfrFWtOjL8Z976gpYjKqFh5n6OnmZqhnMn0DGICyReANkSVfAtJutiukmQuGZ5+qXM3toWMalcC2L+TXyo5VxieQE69DRiJUfgrQjyvjKmE8aze4UaWrRnkKsnUtksV+1I32dkP9DjVYp28P6vyPXBcBHV+9CknmqH/PP1oqpUNEXKh1meT9bxU5+SzuD40Rq0t3ZTm+psVxXe3XhGDzKOfH0ZZz8=) 2025-11-01 13:36:37.619812 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH7j3n1p9hXJqKDZ2s753kcRVi9rF0NFfXH5BX/YMWt1f8DPT+KW7XDo7Rx+6CIn39/sFXi4225K0QSjtXWWOmQ=) 2025-11-01 13:36:37.619825 | orchestrator | 2025-11-01 13:36:37.619853 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-11-01 13:36:37.619865 | orchestrator | Saturday 01 November 2025 13:36:27 +0000 (0:00:01.167) 0:00:14.918 ***** 2025-11-01 13:36:37.619877 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-11-01 13:36:37.619888 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-11-01 13:36:37.619899 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-11-01 13:36:37.619909 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-11-01 13:36:37.619921 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-11-01 13:36:37.619932 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-11-01 13:36:37.619942 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-11-01 13:36:37.619953 | orchestrator | 2025-11-01 13:36:37.619964 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-11-01 13:36:37.619976 | orchestrator | Saturday 01 November 2025 13:36:32 +0000 (0:00:05.468) 0:00:20.387 ***** 2025-11-01 13:36:37.620011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of 
testbed-node-0) 2025-11-01 13:36:37.620025 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-11-01 13:36:37.620036 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-11-01 13:36:37.620047 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-11-01 13:36:37.620058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-11-01 13:36:37.620069 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-11-01 13:36:37.620080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-11-01 13:36:37.620091 | orchestrator | 2025-11-01 13:36:37.620118 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 13:36:37.620130 | orchestrator | Saturday 01 November 2025 13:36:32 +0000 (0:00:00.198) 0:00:20.585 ***** 2025-11-01 13:36:37.620142 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCc+dxohciz01FiEHQw4vVmAJB09tAYGm2DOI/poWG0BRBXi3B4uVagR5BT1Yq4lQNNI1Xgk0Ms68bmJYTmxPmYZZoZyTWCV0efwRPvQBUToMqDGpuzBjBA3dk4wNnr+CJgrkvO/huOtAntXxJomGiwKx72DFVnlDFnN8Vepi1SNiYSsR86VVfN1Sy0K73EsXpD9a8Nx8V4HNdOF3QqSxqeB69v+pAzQb1AzptnGGMJ2XTlvm8Dhw9P7fGF+0NStEgr5iccssEQi54lFfPWknJKjXZPRD8VruRFI8APvpcr79+FWwuPJGg3d6au4o9AzFqbeEBoJ2aMO3S29vaKzV117wQct1XH8mx89/l/6vOp+MoPxWfQhtEy/+z9+5RY4DZTwiKtxMmbV9wI3Bw/JLqx9D864He3uFWnRRdupYN4aEhIsp+TkRUJUfqzfXpZ9Ikju++jyhZFk5oqk81isbAeu6FLxnIcA9m2ULAuOFvk/JUO0w/OxouOUofpuPl6ozk=) 2025-11-01 13:36:37.620154 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPjrQqqZL/PP3a0bTzOXDo35CjNvvnW9teB72U2Cd4BE1Nsuumozc9sa9OIgady3DHwW3rwEIMBZBKw/uFmIJps=) 2025-11-01 13:36:37.620165 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOR8IJpUIjQj+hdcsaPATHKiWm6T5941JNar+Xs47DtF) 2025-11-01 13:36:37.620175 | orchestrator | 2025-11-01 13:36:37.620186 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 13:36:37.620197 | orchestrator | Saturday 01 November 2025 13:36:34 +0000 (0:00:01.164) 0:00:21.750 ***** 2025-11-01 13:36:37.620208 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFdG9ZJdnDhIviHMplrV8sD9JOT5PtOto320BIVEvNMJI6pWqqA6G+yEh5AQaOIcQm+jA9IrySuTWO/fI7P7Ktk=) 2025-11-01 13:36:37.620220 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC3xcbyEnN4cfqGXKDUKiqa8U4CHbtW8+ZSfHlXTss0F6rGqftIvGl3MJeQ2yJ7Lvh1rttnM0chZSnLbUdvbbxAYmSRyexpPpNPJk9w5i/tXuESOBBQW9wXfsP9v9remaIjNKRzP5fwxGGyfGLmttG8jCVqIE+V/23Lw+Vui5v6eaB6aHKTXKN5DFRhYFfcDzjZtwQsYiqG5iGLXwZScQkEBWVQx6vQ03L7NfI+GH0zMZTS4QNK6EFgnHMuLRd9oncEuc8lCBr7jKgaqkcIYx1ijFOtQAqRchGqs4/Fc8yYDGLndsWXKmL8yRtccZlS7qXCOZjAwXNfeK4aeLgbY/upFcrcbEMNjigv9EpAu3LEfqsUKmcs7XQ8S7zUONZWAV9KBs3BBgzViLi49dLzvMHDZ5sj3aIr6vk0jAgYleGxSMAxcokf5H+30tbQwaPgh1E0BYp7jNFWllF4rroGntTwHGx8rg6KkA81oRgJRNdXEYAQZYa9CNh3EC+pq4UM860=) 2025-11-01 13:36:37.620231 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP7XW9j7V7qjtu68uTuJfh1dO9O3N7L5cCGxQXAd8YNj) 2025-11-01 13:36:37.620250 | orchestrator | 2025-11-01 13:36:37.620262 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 13:36:37.620272 | orchestrator | Saturday 01 November 2025 13:36:35 +0000 (0:00:01.137) 0:00:22.888 ***** 2025-11-01 13:36:37.620283 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdGCeiW/8N8Zz8p+C7+3xTOYvEzODda5YKvgf4h3EPVbV9mgk2edIuyUtyxfjAIE0kmMGboIRBCGXXBJRj5YvyJdeS9bVPQkl1NqEB9ZhJLBn4sPgwed3yDB2OFUaGCTRsdaxspOgSotRp6XPTRx3OY8YD1ncebb7+1Xvre/+TlOCQOYY9hAfyvDPVPCZH+FgHUdGgaqfpbz81BeQxEkvKfqE1bfvy1m3McacT4Fp4pr7Y1tOJf1ELiNHkRZcPSZrfQ70QSqWHAkNp/FodHpIAFdGWQi8D3bu/E5eq+0jsG7eGj6JMt7EaE8dhP1bO3swwFv6jrv4YvR8NSOplLniMthdZuUKeQ5pslB1+ok2nBSXKG6EL0D/b6UeNfDY4tJyLT6V7Vi5OVPZ0o+HM2T5byH2m01jItRVyB41/NzUb8fEHOukGKfMPCzT9dGVZNMwFrhcuQfzxKM/3GhaVUv0GCY4rYF9+4xy0fg5QWoph5tGKDAq6pFy43LmKCPca1is=) 2025-11-01 13:36:37.620295 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDK0xU8MJocS4aKJhGrvtsjoA2OJRhtSLPZWkBzud0fZ2BU+vEsI6Q74YNFUNo4YsAQp/NnGZR4dXqFWmP4BU6g=) 2025-11-01 13:36:37.620306 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBDR4SMkO8V/2FkpoVcUZgLs3ys0GL3q8dqKY0oMPZbx) 2025-11-01 13:36:37.620347 | orchestrator | 2025-11-01 13:36:37.620361 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 13:36:37.620373 | orchestrator | Saturday 01 November 2025 13:36:36 +0000 (0:00:01.182) 0:00:24.070 ***** 2025-11-01 13:36:37.620399 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCCqsfAsc6I+27KwRjgtC8Bzn+l0aSVagjAdFvZ1HI4hUsn5DVoxoACeoCrNRCTHy6/LJpLMW+BXoKb5tETove0VQvUEc6AcaeanefGA9vYLW/sHHbZDGdQRgda6EREm8zDrlGz0Kh3JEMHgskEf4655UiJMEhxAqaZcWvMtWbsclAP1EyfqMBnhWUk7OkGssOlkqGuF/KOlufh/tnp7Zt1t6bGxWiX4hqbJjpBcPu5k/GKWyjsWwpGfkkmb+gIzmaPFc3FQgej1/1vgHpXJuJpP9Enr+z0rVU6aKEVIUMXISlom/LTr+pcCL5h1D+EazwC36+/M2rlQczDOfRnZabTkxJfCQi7/df3vqHJO57n/ljPOxNF6OEp6SIDc8LpmZwB123AG8nWaLZKswymYr7f0kWJG91X9wahNHEFXHzXDG6g5vfSaNBLhs0DeCceCV4c8uHHOXCQaO1mtJXTOI/4GeWQYY1Papew6xJalfoesLDXFoMoeWaXyQkf6Lw1Xls=) 2025-11-01 13:36:42.608646 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOT06qO4/sU7IU16Un3kx3E8hsZltOEwrYWZCG93qWJx) 2025-11-01 13:36:42.608716 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFPZ8vfaMnBlqze3ziQIF69FCHxOTp/41CeW8Wpxqrjpq6dFjMBm0oFg3tM8bAxX4BpE3dduyaGrTJxv+5ysmm0=) 2025-11-01 13:36:42.608730 | orchestrator | 2025-11-01 13:36:42.608743 | orchestrator | TASK 
[osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 13:36:42.608756 | orchestrator | Saturday 01 November 2025 13:36:37 +0000 (0:00:01.184) 0:00:25.254 ***** 2025-11-01 13:36:42.608767 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKLLcADEq3vBcbddlc3Gt9k9GPTkbV1L3dDLEtYFWiwJ50g4vofK//eOzPBHy2N64U4297/lxpdjAp4Avvq8miE=) 2025-11-01 13:36:42.608780 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCV31aJax6cjCZB9GMpFKJKN+2l3ZVbcekHs5FsbdqH0iYZz2fL3eiRWl1aD0kgow0+P+goW0/oXjE7qd+OF9Hloe1YcrsqQZa8B6yYTl70pdK0aD7MeZk57Z0cLzRFMNkvpxipZSX0QmZpU47qb2dj1mxE3/z9dtKBCV5Jih1qTzunwJbwaWmFwSn0Y9MbrU7+QaWtPBM2HFBx1eQpRb70V75jts5Sw0icbSXSyOuK8F5zQfd+6u4qUF0xgPUkVErpQ+sFszHeG/uhKO3DIBJi7sLR/FNQaXoqy8vPnoD99zGpb4BjJiBKQu6g+F0djq7efKtbpFunQN+Xhzj+InIOUShGx03N0LeCbRKE/c/rDmLQ3O1S7uUUlnRZsKEsDgRxtdQaMl9wKhZgjqWs/b/nu2UI9AReCHMNcq1BmoDAVNfQTCVz4As0aBp2cKJ0sat+UtNxcO78qNp7Aso2ICBjrPkDgVmD/13f2k/yOKSAF0bIjP4zxdcP5rhDAhKuTN8=) 2025-11-01 13:36:42.608794 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINyvZCGu56cJXHufPaPq1bCil0taw9PAPWGxIcipxizZ) 2025-11-01 13:36:42.608805 | orchestrator | 2025-11-01 13:36:42.608816 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 13:36:42.608846 | orchestrator | Saturday 01 November 2025 13:36:38 +0000 (0:00:01.161) 0:00:26.415 ***** 2025-11-01 13:36:42.608870 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVFfx6Anu7Xi0359JV0ecjYLgwoe9YRSyre12UPVk9/EE7oqoOWWWXdQDi1cfvgCmWaQQBw1ek57fXQfoF4ym5NJ5OPiZ/YdTEZPKg2MxqnJWNBUxXXcHkYybYpGMwf8MI82QIYllXIM+sa2hxvJaxMo4Vimy6gS1I5whZyOZ4qo5guscyD5g4eRjbxYHDRh+26QCChwLPHjkvchfO6Ly24xYvSgw9Gwg1y+DPZeyeBsjeGQ2wQSGTahxghYypq47w4Ua+fPAK39ED0vxmaJcoZuRjzrJt8/OdsfgMYcDGy+UDF3df8zXtAagZ6SXPPIO5vRwJ1/vUroxtDhKLerTVxxocmt5Ces6tZ4ipGEYO92lukiGOowaR7Ym4sLqTZGMOqgRFpR4ZcJX1IPk5cv9m6LhPYxoI29cvtZXFJMPrNO3f9PPwZ+Xo7aotQNY7rn0lrHKIWAPQUxioccZH0GBP1oRLfI3siD8QqZif5dzcKzi2zScg7hsFw43AymlUjGU=) 2025-11-01 13:36:42.608882 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAyuducmIZCsZ4nG0SmNzxM3c/Nde6Ui6shOJTL5Jx89) 2025-11-01 13:36:42.608894 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKlxxBGUuPrD40NBhGu4khbS+13yX6gO1tkFNMo3nMkKii/2AUq0OWtv01yTnOPkJheNTmHV5RXJtbxMDZvxNYQ=) 2025-11-01 13:36:42.608905 | orchestrator | 2025-11-01 13:36:42.608916 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 13:36:42.608926 | orchestrator | Saturday 01 November 2025 13:36:39 +0000 (0:00:01.150) 0:00:27.565 ***** 2025-11-01 13:36:42.608937 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKj66ettQ6kXawZwo+Bil/Eg1ZM0N68xK6KbiNYvDg2l) 2025-11-01 13:36:42.608948 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCHbaHO8gcYu9WU/BWt+qUHCCGU+g7lbGmYBZbHfXFRG02L0Cubb3F1TYnIUcwCuknvnt5TcNVY3ayM0UL2EtAv4I4Fljmfke9nUV1JY0bYHaY5FKj55z5JooEG/bP+SI14Gg+K5bh/vJNWZ0YGCbDEDD7y2xMt9W5V/pFmwDOwkXIPhI9SXfFhiOYulemkwtRqsLj393lHotU6GbMF2ShfnMhJpUmLwT2FI5evOBt5IwuiwVOPdCHbNfKMJ9ij89LhlQUta1z3JlBqBc8Q8kg0ea50NmwBrUnps0bfrFWtOjL8Z976gpYjKqFh5n6OnmZqhnMn0DGICyReANkSVfAtJutiukmQuGZ5+qXM3toWMalcC2L+TXyo5VxieQE69DRiJUfgrQjyvjKmE8aze4UaWrRnkKsnUtksV+1I32dkP9DjVYp28P6vyPXBcBHV+9CknmqH/PP1oqpUNEXKh1meT9bxU5+SzuD40Rq0t3ZTm+psVxXe3XhGDzKOfH0ZZz8=) 2025-11-01 13:36:42.608960 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH7j3n1p9hXJqKDZ2s753kcRVi9rF0NFfXH5BX/YMWt1f8DPT+KW7XDo7Rx+6CIn39/sFXi4225K0QSjtXWWOmQ=) 2025-11-01 13:36:42.608971 | orchestrator | 2025-11-01 13:36:42.608982 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-11-01 13:36:42.608992 | orchestrator | Saturday 01 November 2025 13:36:41 +0000 (0:00:01.187) 0:00:28.753 ***** 2025-11-01 13:36:42.609004 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-11-01 13:36:42.609015 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-11-01 13:36:42.609037 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-11-01 13:36:42.609049 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-11-01 13:36:42.609059 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-11-01 13:36:42.609070 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-11-01 13:36:42.609080 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-11-01 13:36:42.609091 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:36:42.609102 | orchestrator | 2025-11-01 13:36:42.609113 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-11-01 13:36:42.609124 | orchestrator | Saturday 01 November 2025 13:36:41 +0000 (0:00:00.201) 0:00:28.955 ***** 2025-11-01 13:36:42.609134 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:36:42.609145 | orchestrator | 2025-11-01 13:36:42.609156 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-11-01 13:36:42.609166 | orchestrator | Saturday 01 November 2025 13:36:41 +0000 (0:00:00.060) 0:00:29.015 ***** 2025-11-01 13:36:42.609183 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:36:42.609194 | orchestrator | 2025-11-01 13:36:42.609205 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-11-01 13:36:42.609215 | orchestrator | Saturday 01 November 2025 13:36:41 +0000 (0:00:00.065) 0:00:29.081 ***** 2025-11-01 13:36:42.609226 | orchestrator | changed: [testbed-manager] 2025-11-01 13:36:42.609236 | orchestrator | 2025-11-01 13:36:42.609247 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:36:42.609258 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-01 13:36:42.609270 | orchestrator | 2025-11-01 13:36:42.609280 | orchestrator | 2025-11-01 13:36:42.609291 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:36:42.609302 | orchestrator | Saturday 01 November 2025 13:36:42 +0000 (0:00:00.852) 0:00:29.933 ***** 
2025-11-01 13:36:42.609313 | orchestrator | =============================================================================== 2025-11-01 13:36:42.609354 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.31s 2025-11-01 13:36:42.609365 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.47s 2025-11-01 13:36:42.609376 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.29s 2025-11-01 13:36:42.609387 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2025-11-01 13:36:42.609398 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2025-11-01 13:36:42.609409 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-11-01 13:36:42.609419 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-11-01 13:36:42.609430 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-11-01 13:36:42.609441 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-11-01 13:36:42.609452 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-11-01 13:36:42.609469 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-11-01 13:36:42.609480 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-11-01 13:36:42.609491 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-11-01 13:36:42.609502 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-11-01 13:36:42.609512 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-11-01 13:36:42.609523 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-11-01 13:36:42.609534 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.85s 2025-11-01 13:36:42.609544 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.20s 2025-11-01 13:36:42.609555 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2025-11-01 13:36:42.609566 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-11-01 13:36:42.989315 | orchestrator | + osism apply squid 2025-11-01 13:36:55.157381 | orchestrator | 2025-11-01 13:36:55 | INFO  | Task 25b935b8-ce0f-4908-af83-ed97044af59f (squid) was prepared for execution. 2025-11-01 13:36:55.157485 | orchestrator | 2025-11-01 13:36:55 | INFO  | It takes a moment until task 25b935b8-ce0f-4908-af83-ed97044af59f (squid) has been started and output is visible here. 
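The known_hosts play above runs ssh-keyscan for every host twice, once by hostname and once by ansible_host address, writes the scanned entries, and finally sets the file permissions. A rough shell equivalent, with the scan timeout, the target file, and the mode being assumptions (the hostnames and addresses are the ones shown in the log):

# Sketch only: ~/.ssh/known_hosts, -T 5 and mode 0644 are assumed values.
for target in testbed-manager testbed-node-{0..5} 192.168.16.5 192.168.16.{10..15}; do
    ssh-keyscan -T 5 "$target" 2>/dev/null >> ~/.ssh/known_hosts
done
sort -u ~/.ssh/known_hosts -o ~/.ssh/known_hosts   # drop duplicate entries
chmod 0644 ~/.ssh/known_hosts                      # "Set file permissions"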
2025-11-01 13:38:51.672123 | orchestrator | 2025-11-01 13:38:51.672230 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-11-01 13:38:51.672242 | orchestrator | 2025-11-01 13:38:51.672250 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-11-01 13:38:51.672259 | orchestrator | Saturday 01 November 2025 13:36:59 +0000 (0:00:00.176) 0:00:00.176 ***** 2025-11-01 13:38:51.672282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-11-01 13:38:51.672290 | orchestrator | 2025-11-01 13:38:51.672299 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-11-01 13:38:51.672306 | orchestrator | Saturday 01 November 2025 13:36:59 +0000 (0:00:00.103) 0:00:00.280 ***** 2025-11-01 13:38:51.672314 | orchestrator | ok: [testbed-manager] 2025-11-01 13:38:51.672321 | orchestrator | 2025-11-01 13:38:51.672349 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-11-01 13:38:51.672356 | orchestrator | Saturday 01 November 2025 13:37:01 +0000 (0:00:01.662) 0:00:01.943 ***** 2025-11-01 13:38:51.672374 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-11-01 13:38:51.672382 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-11-01 13:38:51.672390 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-11-01 13:38:51.672397 | orchestrator | 2025-11-01 13:38:51.672404 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-11-01 13:38:51.672411 | orchestrator | Saturday 01 November 2025 13:37:02 +0000 (0:00:01.257) 0:00:03.200 ***** 2025-11-01 13:38:51.672418 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-11-01 13:38:51.672426 | orchestrator | 2025-11-01 13:38:51.672433 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-11-01 13:38:51.672440 | orchestrator | Saturday 01 November 2025 13:37:03 +0000 (0:00:01.130) 0:00:04.331 ***** 2025-11-01 13:38:51.672447 | orchestrator | ok: [testbed-manager] 2025-11-01 13:38:51.672454 | orchestrator | 2025-11-01 13:38:51.672461 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-11-01 13:38:51.672469 | orchestrator | Saturday 01 November 2025 13:37:04 +0000 (0:00:00.388) 0:00:04.719 ***** 2025-11-01 13:38:51.672476 | orchestrator | changed: [testbed-manager] 2025-11-01 13:38:51.672483 | orchestrator | 2025-11-01 13:38:51.672490 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-11-01 13:38:51.672497 | orchestrator | Saturday 01 November 2025 13:37:05 +0000 (0:00:00.990) 0:00:05.710 ***** 2025-11-01 13:38:51.672505 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-11-01 13:38:51.672512 | orchestrator | ok: [testbed-manager] 2025-11-01 13:38:51.672519 | orchestrator | 2025-11-01 13:38:51.672527 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-11-01 13:38:51.672534 | orchestrator | Saturday 01 November 2025 13:37:38 +0000 (0:00:33.099) 0:00:38.809 ***** 2025-11-01 13:38:51.672541 | orchestrator | changed: [testbed-manager] 2025-11-01 13:38:51.672548 | orchestrator | 2025-11-01 13:38:51.672555 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-11-01 13:38:51.672562 | orchestrator | Saturday 01 November 2025 13:37:50 +0000 (0:00:12.276) 0:00:51.086 ***** 2025-11-01 13:38:51.672569 | orchestrator | Pausing for 60 seconds 2025-11-01 13:38:51.672577 | orchestrator | changed: [testbed-manager] 2025-11-01 13:38:51.672584 | orchestrator | 2025-11-01 13:38:51.672592 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-11-01 13:38:51.672599 | orchestrator | Saturday 01 November 2025 13:38:50 +0000 (0:01:00.089) 0:01:51.176 ***** 2025-11-01 13:38:51.672606 | orchestrator | ok: [testbed-manager] 2025-11-01 13:38:51.672613 | orchestrator | 2025-11-01 13:38:51.672620 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-11-01 13:38:51.672628 | orchestrator | Saturday 01 November 2025 13:38:50 +0000 (0:00:00.068) 0:01:51.244 ***** 2025-11-01 13:38:51.672635 | orchestrator | changed: [testbed-manager] 2025-11-01 13:38:51.672642 | orchestrator | 2025-11-01 13:38:51.672651 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:38:51.672659 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:38:51.672674 | orchestrator | 2025-11-01 13:38:51.672682 | orchestrator | 2025-11-01 13:38:51.672691 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:38:51.672699 | orchestrator | Saturday 01 November 2025 13:38:51 +0000 (0:00:00.755) 0:01:52.000 ***** 2025-11-01 13:38:51.672707 | orchestrator | =============================================================================== 2025-11-01 13:38:51.672714 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2025-11-01 13:38:51.672722 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 33.10s 2025-11-01 13:38:51.672730 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.28s 2025-11-01 13:38:51.672738 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.66s 2025-11-01 13:38:51.672746 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.26s 2025-11-01 13:38:51.672754 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.13s 2025-11-01 13:38:51.672762 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.99s 2025-11-01 13:38:51.672770 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.76s 2025-11-01 13:38:51.672778 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s 2025-11-01 13:38:51.672786 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 
0.10s 2025-11-01 13:38:51.672794 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-11-01 13:38:52.043034 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 13:38:52.043597 | orchestrator | ++ semver latest 9.0.0 2025-11-01 13:38:52.096497 | orchestrator | + [[ -1 -lt 0 ]] 2025-11-01 13:38:52.096528 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 13:38:52.097200 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-11-01 13:39:04.414743 | orchestrator | 2025-11-01 13:39:04 | INFO  | Task f359570c-5c80-4402-9d64-926376d22f32 (operator) was prepared for execution. 2025-11-01 13:39:04.414843 | orchestrator | 2025-11-01 13:39:04 | INFO  | It takes a moment until task f359570c-5c80-4402-9d64-926376d22f32 (operator) has been started and output is visible here. 2025-11-01 13:39:22.278203 | orchestrator | 2025-11-01 13:39:22.278306 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-11-01 13:39:22.278322 | orchestrator | 2025-11-01 13:39:22.278379 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 13:39:22.278391 | orchestrator | Saturday 01 November 2025 13:39:09 +0000 (0:00:00.180) 0:00:00.180 ***** 2025-11-01 13:39:22.278402 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:39:22.278414 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:39:22.278425 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:39:22.278436 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:39:22.278447 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:39:22.278457 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:39:22.278468 | orchestrator | 2025-11-01 13:39:22.278479 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-11-01 13:39:22.278490 | orchestrator | Saturday 01 November 2025 13:39:12 +0000 (0:00:03.510) 0:00:03.690 ***** 2025-11-01 13:39:22.278501 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:39:22.278513 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:39:22.278524 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:39:22.278535 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:39:22.278545 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:39:22.278556 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:39:22.278572 | orchestrator | 2025-11-01 13:39:22.278583 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-11-01 13:39:22.278594 | orchestrator | 2025-11-01 13:39:22.278605 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-11-01 13:39:22.278616 | orchestrator | Saturday 01 November 2025 13:39:13 +0000 (0:00:00.920) 0:00:04.611 ***** 2025-11-01 13:39:22.278627 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:39:22.278655 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:39:22.278666 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:39:22.278677 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:39:22.278687 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:39:22.278698 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:39:22.278709 | orchestrator | 2025-11-01 13:39:22.278719 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-11-01 13:39:22.278730 | orchestrator | Saturday 01 November 2025 13:39:13 +0000 (0:00:00.206) 0:00:04.817 ***** 2025-11-01 13:39:22.278741 | orchestrator | ok: 
[testbed-node-0] 2025-11-01 13:39:22.278754 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:39:22.278766 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:39:22.278777 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:39:22.278789 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:39:22.278800 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:39:22.278812 | orchestrator | 2025-11-01 13:39:22.278836 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-11-01 13:39:22.278849 | orchestrator | Saturday 01 November 2025 13:39:13 +0000 (0:00:00.210) 0:00:05.028 ***** 2025-11-01 13:39:22.278861 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:39:22.278874 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:39:22.278886 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:39:22.278899 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:39:22.278911 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:39:22.278923 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:39:22.278934 | orchestrator | 2025-11-01 13:39:22.278950 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-11-01 13:39:22.278963 | orchestrator | Saturday 01 November 2025 13:39:14 +0000 (0:00:00.725) 0:00:05.753 ***** 2025-11-01 13:39:22.278974 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:39:22.278986 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:39:22.278998 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:39:22.279010 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:39:22.279021 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:39:22.279033 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:39:22.279046 | orchestrator | 2025-11-01 13:39:22.279058 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-11-01 13:39:22.279069 | orchestrator | Saturday 01 November 2025 13:39:15 +0000 (0:00:00.912) 0:00:06.665 ***** 2025-11-01 13:39:22.279082 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-11-01 13:39:22.279094 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-11-01 13:39:22.279105 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-11-01 13:39:22.279116 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-11-01 13:39:22.279126 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-11-01 13:39:22.279137 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-11-01 13:39:22.279148 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-11-01 13:39:22.279158 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-11-01 13:39:22.279169 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-11-01 13:39:22.279180 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-11-01 13:39:22.279191 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-11-01 13:39:22.279201 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-11-01 13:39:22.279212 | orchestrator | 2025-11-01 13:39:22.279223 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-11-01 13:39:22.279233 | orchestrator | Saturday 01 November 2025 13:39:17 +0000 (0:00:01.495) 0:00:08.161 ***** 2025-11-01 13:39:22.279244 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:39:22.279254 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:39:22.279265 | orchestrator | changed: [testbed-node-5] 
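The operator play creates the operator group and user on every testbed node, adds the user to the adm and sudo groups, and copies a sudoers file (the remaining host results and the .bashrc and SSH-key tasks follow below). If you want to verify the result by hand, something like the following sketch would do; the user name "dragon" is an assumption (the usual OSISM operator default) and should be replaced with whatever operator_user is actually configured.

  # hypothetical spot checks on any testbed node
  getent passwd dragon                   # user exists with the expected home and shell
  id dragon                              # supplementary groups should include adm and sudo
  sudo -l -U dragon                      # rules from the copied sudoers file
  sudo cat /home/dragon/.ssh/authorized_keys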
2025-11-01 13:39:22.279275 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:39:22.279286 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:39:22.279296 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:39:22.279314 | orchestrator | 2025-11-01 13:39:22.279340 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-11-01 13:39:22.279352 | orchestrator | Saturday 01 November 2025 13:39:18 +0000 (0:00:01.330) 0:00:09.491 ***** 2025-11-01 13:39:22.279362 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-11-01 13:39:22.279373 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To 2025-11-01 13:39:22.279384 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-11-01 13:39:22.279395 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-11-01 13:39:22.279421 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-11-01 13:39:22.279433 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-11-01 13:39:22.279444 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-11-01 13:39:22.279455 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-11-01 13:39:22.279465 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-11-01 13:39:22.279476 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-11-01 13:39:22.279487 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-11-01 13:39:22.279497 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-11-01 13:39:22.279508 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-11-01 13:39:22.279518 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-11-01 13:39:22.279529 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-11-01 13:39:22.279540 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-11-01 13:39:22.279551 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-11-01 13:39:22.279561 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-11-01 13:39:22.279572 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-11-01 13:39:22.279583 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-11-01 13:39:22.279593 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-11-01 13:39:22.279604 | orchestrator | 2025-11-01 13:39:22.279615 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-11-01 13:39:22.279626 | orchestrator | Saturday 01 November 2025 13:39:19 +0000 (0:00:01.492) 0:00:10.983 ***** 2025-11-01 13:39:22.279637 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:39:22.279648 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:39:22.279658 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:39:22.279669 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:39:22.279680 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:39:22.279690 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:39:22.279701 | orchestrator | 2025-11-01 13:39:22.279712 | orchestrator | TASK [osism.commons.operator : 
Create .ssh directory] ************************** 2025-11-01 13:39:22.279723 | orchestrator | Saturday 01 November 2025 13:39:20 +0000 (0:00:00.189) 0:00:11.173 ***** 2025-11-01 13:39:22.279733 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:39:22.279744 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:39:22.279755 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:39:22.279765 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:39:22.279776 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:39:22.279787 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:39:22.279797 | orchestrator | 2025-11-01 13:39:22.279808 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-11-01 13:39:22.279819 | orchestrator | Saturday 01 November 2025 13:39:20 +0000 (0:00:00.589) 0:00:11.762 ***** 2025-11-01 13:39:22.279830 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:39:22.279841 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:39:22.279857 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:39:22.279868 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:39:22.279879 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:39:22.279889 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:39:22.279900 | orchestrator | 2025-11-01 13:39:22.279911 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-11-01 13:39:22.279921 | orchestrator | Saturday 01 November 2025 13:39:20 +0000 (0:00:00.239) 0:00:12.002 ***** 2025-11-01 13:39:22.279932 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-11-01 13:39:22.279943 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:39:22.279953 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-01 13:39:22.279964 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:39:22.279975 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-01 13:39:22.279985 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-01 13:39:22.279996 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:39:22.280006 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:39:22.280017 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-11-01 13:39:22.280028 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:39:22.280038 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-01 13:39:22.280049 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:39:22.280059 | orchestrator | 2025-11-01 13:39:22.280070 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-11-01 13:39:22.280081 | orchestrator | Saturday 01 November 2025 13:39:21 +0000 (0:00:00.819) 0:00:12.822 ***** 2025-11-01 13:39:22.280092 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:39:22.280102 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:39:22.280113 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:39:22.280124 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:39:22.280134 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:39:22.280145 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:39:22.280156 | orchestrator | 2025-11-01 13:39:22.280166 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-11-01 13:39:22.280177 | orchestrator | Saturday 01 November 2025 13:39:21 +0000 (0:00:00.176) 0:00:12.998 ***** 2025-11-01 13:39:22.280188 | orchestrator 
| skipping: [testbed-node-0] 2025-11-01 13:39:22.280198 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:39:22.280209 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:39:22.280220 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:39:22.280230 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:39:22.280241 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:39:22.280252 | orchestrator | 2025-11-01 13:39:22.280263 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-11-01 13:39:22.280273 | orchestrator | Saturday 01 November 2025 13:39:22 +0000 (0:00:00.181) 0:00:13.180 ***** 2025-11-01 13:39:22.280284 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:39:22.280295 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:39:22.280305 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:39:22.280316 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:39:22.280381 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:39:23.572693 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:39:23.572830 | orchestrator | 2025-11-01 13:39:23.572849 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-11-01 13:39:23.572862 | orchestrator | Saturday 01 November 2025 13:39:22 +0000 (0:00:00.195) 0:00:13.375 ***** 2025-11-01 13:39:23.572873 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:39:23.572884 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:39:23.572895 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:39:23.572955 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:39:23.572969 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:39:23.572981 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:39:23.572992 | orchestrator | 2025-11-01 13:39:23.573004 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-11-01 13:39:23.573039 | orchestrator | Saturday 01 November 2025 13:39:22 +0000 (0:00:00.713) 0:00:14.089 ***** 2025-11-01 13:39:23.573050 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:39:23.573061 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:39:23.573071 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:39:23.573082 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:39:23.573093 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:39:23.573103 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:39:23.573114 | orchestrator | 2025-11-01 13:39:23.573125 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:39:23.573137 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:39:23.573149 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:39:23.573160 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:39:23.573171 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:39:23.573198 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:39:23.573209 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:39:23.573220 | orchestrator | 2025-11-01 13:39:23.573231 | 
orchestrator | 2025-11-01 13:39:23.573242 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:39:23.573258 | orchestrator | Saturday 01 November 2025 13:39:23 +0000 (0:00:00.289) 0:00:14.378 ***** 2025-11-01 13:39:23.573269 | orchestrator | =============================================================================== 2025-11-01 13:39:23.573280 | orchestrator | Gathering Facts --------------------------------------------------------- 3.51s 2025-11-01 13:39:23.573291 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.50s 2025-11-01 13:39:23.573301 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.49s 2025-11-01 13:39:23.573313 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.33s 2025-11-01 13:39:23.573346 | orchestrator | Do not require tty for all users ---------------------------------------- 0.92s 2025-11-01 13:39:23.573358 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.91s 2025-11-01 13:39:23.573369 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.82s 2025-11-01 13:39:23.573380 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.73s 2025-11-01 13:39:23.573390 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.71s 2025-11-01 13:39:23.573401 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s 2025-11-01 13:39:23.573412 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.29s 2025-11-01 13:39:23.573422 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.24s 2025-11-01 13:39:23.573433 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.21s 2025-11-01 13:39:23.573444 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.21s 2025-11-01 13:39:23.573454 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.20s 2025-11-01 13:39:23.573465 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.19s 2025-11-01 13:39:23.573476 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s 2025-11-01 13:39:23.573494 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s 2025-11-01 13:39:23.933235 | orchestrator | + osism apply --environment custom facts 2025-11-01 13:39:26.267489 | orchestrator | 2025-11-01 13:39:26 | INFO  | Trying to run play facts in environment custom 2025-11-01 13:39:36.425564 | orchestrator | 2025-11-01 13:39:36 | INFO  | Task cbd93772-9a81-45c3-b6d1-62300456bff6 (facts) was prepared for execution. 2025-11-01 13:39:36.425666 | orchestrator | 2025-11-01 13:39:36 | INFO  | It takes a moment until task cbd93772-9a81-45c3-b6d1-62300456bff6 (facts) has been started and output is visible here. 
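As the play output below shows, `osism apply --environment custom facts` distributes the testbed's custom Ansible local facts (a network devices fact on all hosts, Ceph device lists on the storage nodes) before bootstrap runs. A quick way to confirm the fact files landed is sketched here; the /etc/ansible/facts.d path is Ansible's default location for local facts and is assumed rather than taken from this log.

  # hypothetical check on a node that received the Ceph device facts
  ls /etc/ansible/facts.d/
  # local facts appear under ansible_local once facts are gathered again
  ansible testbed-node-3 -m setup -a 'filter=ansible_local' | head -n 40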
2025-11-01 13:40:31.422739 | orchestrator | 2025-11-01 13:40:31.422827 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-11-01 13:40:31.422842 | orchestrator | 2025-11-01 13:40:31.422854 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-11-01 13:40:31.422865 | orchestrator | Saturday 01 November 2025 13:39:41 +0000 (0:00:00.119) 0:00:00.119 ***** 2025-11-01 13:40:31.422876 | orchestrator | ok: [testbed-manager] 2025-11-01 13:40:31.422911 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:40:31.422924 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:40:31.422935 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:40:31.422946 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:40:31.422956 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:40:31.422967 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:40:31.422978 | orchestrator | 2025-11-01 13:40:31.422988 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-11-01 13:40:31.422999 | orchestrator | Saturday 01 November 2025 13:39:42 +0000 (0:00:01.414) 0:00:01.534 ***** 2025-11-01 13:40:31.423010 | orchestrator | ok: [testbed-manager] 2025-11-01 13:40:31.423021 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:40:31.423031 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:40:31.423042 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:40:31.423052 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:40:31.423063 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:40:31.423073 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:40:31.423084 | orchestrator | 2025-11-01 13:40:31.423095 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-11-01 13:40:31.423105 | orchestrator | 2025-11-01 13:40:31.423116 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-11-01 13:40:31.423127 | orchestrator | Saturday 01 November 2025 13:39:44 +0000 (0:00:01.340) 0:00:02.874 ***** 2025-11-01 13:40:31.423137 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:40:31.423148 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:40:31.423159 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:40:31.423169 | orchestrator | 2025-11-01 13:40:31.423180 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-11-01 13:40:31.423192 | orchestrator | Saturday 01 November 2025 13:39:44 +0000 (0:00:00.125) 0:00:03.000 ***** 2025-11-01 13:40:31.423202 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:40:31.423213 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:40:31.423223 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:40:31.423234 | orchestrator | 2025-11-01 13:40:31.423245 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-11-01 13:40:31.423256 | orchestrator | Saturday 01 November 2025 13:39:44 +0000 (0:00:00.274) 0:00:03.275 ***** 2025-11-01 13:40:31.423267 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:40:31.423277 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:40:31.423288 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:40:31.423299 | orchestrator | 2025-11-01 13:40:31.423312 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-11-01 13:40:31.423324 | orchestrator | Saturday 
01 November 2025 13:39:44 +0000 (0:00:00.231) 0:00:03.507 ***** 2025-11-01 13:40:31.423376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:40:31.423412 | orchestrator | 2025-11-01 13:40:31.423425 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-11-01 13:40:31.423438 | orchestrator | Saturday 01 November 2025 13:39:44 +0000 (0:00:00.149) 0:00:03.656 ***** 2025-11-01 13:40:31.423450 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:40:31.423462 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:40:31.423474 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:40:31.423486 | orchestrator | 2025-11-01 13:40:31.423498 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-11-01 13:40:31.423510 | orchestrator | Saturday 01 November 2025 13:39:45 +0000 (0:00:00.555) 0:00:04.211 ***** 2025-11-01 13:40:31.423522 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:40:31.423534 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:40:31.423547 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:40:31.423558 | orchestrator | 2025-11-01 13:40:31.423571 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-11-01 13:40:31.423583 | orchestrator | Saturday 01 November 2025 13:39:45 +0000 (0:00:00.154) 0:00:04.366 ***** 2025-11-01 13:40:31.423595 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:40:31.423608 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:40:31.423620 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:40:31.423633 | orchestrator | 2025-11-01 13:40:31.423645 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-11-01 13:40:31.423657 | orchestrator | Saturday 01 November 2025 13:39:46 +0000 (0:00:01.147) 0:00:05.513 ***** 2025-11-01 13:40:31.423668 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:40:31.423678 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:40:31.423689 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:40:31.423699 | orchestrator | 2025-11-01 13:40:31.423710 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-11-01 13:40:31.423721 | orchestrator | Saturday 01 November 2025 13:39:47 +0000 (0:00:00.513) 0:00:06.027 ***** 2025-11-01 13:40:31.423731 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:40:31.423742 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:40:31.423752 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:40:31.423763 | orchestrator | 2025-11-01 13:40:31.423773 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-11-01 13:40:31.423784 | orchestrator | Saturday 01 November 2025 13:39:48 +0000 (0:00:01.189) 0:00:07.217 ***** 2025-11-01 13:40:31.423795 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:40:31.423805 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:40:31.423816 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:40:31.423826 | orchestrator | 2025-11-01 13:40:31.423837 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-11-01 13:40:31.423848 | orchestrator | Saturday 01 November 2025 13:40:10 +0000 (0:00:22.357) 0:00:29.575 ***** 2025-11-01 13:40:31.423858 | orchestrator | 
skipping: [testbed-node-3] 2025-11-01 13:40:31.423869 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:40:31.423879 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:40:31.423890 | orchestrator | 2025-11-01 13:40:31.423900 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-11-01 13:40:31.423925 | orchestrator | Saturday 01 November 2025 13:40:10 +0000 (0:00:00.103) 0:00:29.678 ***** 2025-11-01 13:40:31.423936 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:40:31.423947 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:40:31.423957 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:40:31.423968 | orchestrator | 2025-11-01 13:40:31.423979 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-11-01 13:40:31.423989 | orchestrator | Saturday 01 November 2025 13:40:20 +0000 (0:00:10.048) 0:00:39.727 ***** 2025-11-01 13:40:31.424000 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:40:31.424010 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:40:31.424021 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:40:31.424032 | orchestrator | 2025-11-01 13:40:31.424050 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-11-01 13:40:31.424061 | orchestrator | Saturday 01 November 2025 13:40:21 +0000 (0:00:00.491) 0:00:40.218 ***** 2025-11-01 13:40:31.424071 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-11-01 13:40:31.424082 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-11-01 13:40:31.424093 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-11-01 13:40:31.424103 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-11-01 13:40:31.424113 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-11-01 13:40:31.424124 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-11-01 13:40:31.424134 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-11-01 13:40:31.424145 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-11-01 13:40:31.424155 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-11-01 13:40:31.424166 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-11-01 13:40:31.424176 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-11-01 13:40:31.424187 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-11-01 13:40:31.424197 | orchestrator | 2025-11-01 13:40:31.424208 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-11-01 13:40:31.424219 | orchestrator | Saturday 01 November 2025 13:40:25 +0000 (0:00:03.788) 0:00:44.006 ***** 2025-11-01 13:40:31.424229 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:40:31.424240 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:40:31.424250 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:40:31.424261 | orchestrator | 2025-11-01 13:40:31.424272 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-01 13:40:31.424282 | orchestrator | 2025-11-01 13:40:31.424293 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-01 13:40:31.424304 | orchestrator | 
Saturday 01 November 2025 13:40:26 +0000 (0:00:01.787) 0:00:45.794 ***** 2025-11-01 13:40:31.424314 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:40:31.424325 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:40:31.424354 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:40:31.424365 | orchestrator | ok: [testbed-manager] 2025-11-01 13:40:31.424375 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:40:31.424386 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:40:31.424396 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:40:31.424407 | orchestrator | 2025-11-01 13:40:31.424418 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:40:31.424430 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:40:31.424441 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:40:31.424453 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:40:31.424500 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:40:31.424512 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:40:31.424524 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:40:31.424534 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:40:31.424552 | orchestrator | 2025-11-01 13:40:31.424563 | orchestrator | 2025-11-01 13:40:31.424574 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:40:31.424585 | orchestrator | Saturday 01 November 2025 13:40:31 +0000 (0:00:04.448) 0:00:50.243 ***** 2025-11-01 13:40:31.424595 | orchestrator | =============================================================================== 2025-11-01 13:40:31.424606 | orchestrator | osism.commons.repository : Update package cache ------------------------ 22.36s 2025-11-01 13:40:31.424616 | orchestrator | Install required packages (Debian) ------------------------------------- 10.05s 2025-11-01 13:40:31.424627 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.45s 2025-11-01 13:40:31.424638 | orchestrator | Copy fact files --------------------------------------------------------- 3.79s 2025-11-01 13:40:31.424648 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.79s 2025-11-01 13:40:31.424659 | orchestrator | Create custom facts directory ------------------------------------------- 1.41s 2025-11-01 13:40:31.424676 | orchestrator | Copy fact file ---------------------------------------------------------- 1.34s 2025-11-01 13:40:31.695772 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.19s 2025-11-01 13:40:31.695809 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.15s 2025-11-01 13:40:31.695820 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.56s 2025-11-01 13:40:31.695831 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.51s 2025-11-01 13:40:31.695843 | orchestrator | Create custom facts directory 
------------------------------------------- 0.49s 2025-11-01 13:40:31.695854 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.27s 2025-11-01 13:40:31.695865 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s 2025-11-01 13:40:31.695877 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s 2025-11-01 13:40:31.695888 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2025-11-01 13:40:31.695899 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s 2025-11-01 13:40:31.695910 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-11-01 13:40:32.081764 | orchestrator | + osism apply bootstrap 2025-11-01 13:40:44.424244 | orchestrator | 2025-11-01 13:40:44 | INFO  | Task 4b0f7956-f687-4e02-bae3-0bb439cd5511 (bootstrap) was prepared for execution. 2025-11-01 13:40:44.424377 | orchestrator | 2025-11-01 13:40:44 | INFO  | It takes a moment until task 4b0f7956-f687-4e02-bae3-0bb439cd5511 (bootstrap) has been started and output is visible here. 2025-11-01 13:41:02.303962 | orchestrator | 2025-11-01 13:41:02.304075 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-11-01 13:41:02.304093 | orchestrator | 2025-11-01 13:41:02.304105 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-11-01 13:41:02.304116 | orchestrator | Saturday 01 November 2025 13:40:49 +0000 (0:00:00.158) 0:00:00.158 ***** 2025-11-01 13:41:02.304128 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:41:02.304139 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:41:02.304150 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:41:02.304161 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:02.304172 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:02.304182 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:02.304193 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:02.304203 | orchestrator | 2025-11-01 13:41:02.304214 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-01 13:41:02.304225 | orchestrator | 2025-11-01 13:41:02.304235 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-01 13:41:02.304246 | orchestrator | Saturday 01 November 2025 13:40:49 +0000 (0:00:00.324) 0:00:00.482 ***** 2025-11-01 13:41:02.304271 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:41:02.304282 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:41:02.304406 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:41:02.304421 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:02.304432 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:02.304443 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:02.304453 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:02.304464 | orchestrator | 2025-11-01 13:41:02.304474 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-11-01 13:41:02.304485 | orchestrator | 2025-11-01 13:41:02.304496 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-01 13:41:02.304507 | orchestrator | Saturday 01 November 2025 13:40:53 +0000 (0:00:03.869) 0:00:04.351 ***** 2025-11-01 13:41:02.304520 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2025-11-01 13:41:02.304533 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-11-01 13:41:02.304545 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-01 13:41:02.304557 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-11-01 13:41:02.304570 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-01 13:41:02.304581 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-11-01 13:41:02.304594 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-11-01 13:41:02.304606 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-11-01 13:41:02.304619 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-11-01 13:41:02.304630 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-11-01 13:41:02.304643 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-11-01 13:41:02.304656 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-11-01 13:41:02.304668 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-11-01 13:41:02.304680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-01 13:41:02.304692 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-11-01 13:41:02.304704 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-11-01 13:41:02.304716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-01 13:41:02.304728 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:41:02.304740 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-11-01 13:41:02.304752 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-11-01 13:41:02.304764 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-11-01 13:41:02.304776 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-11-01 13:41:02.304788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 13:41:02.304802 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-11-01 13:41:02.304814 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-11-01 13:41:02.304824 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-11-01 13:41:02.304835 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-11-01 13:41:02.304845 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:41:02.304856 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 13:41:02.304866 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-11-01 13:41:02.304877 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-11-01 13:41:02.304887 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-11-01 13:41:02.304898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 13:41:02.304908 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-11-01 13:41:02.304919 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-11-01 13:41:02.304929 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-11-01 13:41:02.304940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-11-01 13:41:02.304958 | orchestrator | skipping: 
[testbed-node-3] 2025-11-01 13:41:02.304969 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-11-01 13:41:02.304980 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-11-01 13:41:02.304990 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-11-01 13:41:02.305001 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-11-01 13:41:02.305011 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:41:02.305022 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-11-01 13:41:02.305033 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-11-01 13:41:02.305044 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-11-01 13:41:02.305054 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:41:02.305083 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-11-01 13:41:02.305095 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-11-01 13:41:02.305106 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-11-01 13:41:02.305117 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-11-01 13:41:02.305127 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:41:02.305138 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-11-01 13:41:02.305149 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-11-01 13:41:02.305159 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-11-01 13:41:02.305170 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:41:02.305181 | orchestrator | 2025-11-01 13:41:02.305192 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-11-01 13:41:02.305202 | orchestrator | 2025-11-01 13:41:02.305213 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-11-01 13:41:02.305224 | orchestrator | Saturday 01 November 2025 13:40:54 +0000 (0:00:00.530) 0:00:04.881 ***** 2025-11-01 13:41:02.305234 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:41:02.305245 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:02.305256 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:41:02.305267 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:02.305278 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:41:02.305288 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:02.305299 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:02.305310 | orchestrator | 2025-11-01 13:41:02.305321 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-11-01 13:41:02.305351 | orchestrator | Saturday 01 November 2025 13:40:55 +0000 (0:00:01.387) 0:00:06.269 ***** 2025-11-01 13:41:02.305363 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:02.305374 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:02.305385 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:02.305395 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:41:02.305406 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:41:02.305416 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:02.305427 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:41:02.305437 | orchestrator | 2025-11-01 13:41:02.305448 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-11-01 13:41:02.305459 | orchestrator | Saturday 01 
November 2025 13:40:56 +0000 (0:00:01.399) 0:00:07.668 ***** 2025-11-01 13:41:02.305470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:41:02.305483 | orchestrator | 2025-11-01 13:41:02.305494 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-11-01 13:41:02.305505 | orchestrator | Saturday 01 November 2025 13:40:57 +0000 (0:00:00.347) 0:00:08.016 ***** 2025-11-01 13:41:02.305516 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:41:02.305526 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:41:02.305537 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:41:02.305555 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:41:02.305565 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:41:02.305576 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:41:02.305586 | orchestrator | changed: [testbed-manager] 2025-11-01 13:41:02.305597 | orchestrator | 2025-11-01 13:41:02.305608 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-11-01 13:41:02.305619 | orchestrator | Saturday 01 November 2025 13:40:59 +0000 (0:00:02.285) 0:00:10.302 ***** 2025-11-01 13:41:02.305629 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:41:02.305641 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:41:02.305671 | orchestrator | 2025-11-01 13:41:02.305682 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-11-01 13:41:02.305693 | orchestrator | Saturday 01 November 2025 13:40:59 +0000 (0:00:00.317) 0:00:10.620 ***** 2025-11-01 13:41:02.305704 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:41:02.305715 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:41:02.305725 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:41:02.305736 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:41:02.305747 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:41:02.305757 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:41:02.305768 | orchestrator | 2025-11-01 13:41:02.305779 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-11-01 13:41:02.305790 | orchestrator | Saturday 01 November 2025 13:41:00 +0000 (0:00:01.056) 0:00:11.677 ***** 2025-11-01 13:41:02.305800 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:41:02.305811 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:41:02.305821 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:41:02.305832 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:41:02.305843 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:41:02.305853 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:41:02.305864 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:41:02.305874 | orchestrator | 2025-11-01 13:41:02.305885 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-11-01 13:41:02.305896 | orchestrator | Saturday 01 November 2025 13:41:01 +0000 (0:00:00.634) 0:00:12.311 ***** 2025-11-01 13:41:02.305906 | orchestrator | skipping: 
[testbed-node-0] 2025-11-01 13:41:02.305917 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:41:02.305928 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:41:02.305947 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:41:02.305958 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:41:02.305969 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:41:02.305980 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:02.305991 | orchestrator | 2025-11-01 13:41:02.306001 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-11-01 13:41:02.306013 | orchestrator | Saturday 01 November 2025 13:41:02 +0000 (0:00:00.668) 0:00:12.980 ***** 2025-11-01 13:41:02.306086 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:41:02.306097 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:41:02.306117 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:41:15.903789 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:41:15.903904 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:41:15.903919 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:41:15.903931 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:41:15.903942 | orchestrator | 2025-11-01 13:41:15.903956 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-11-01 13:41:15.903968 | orchestrator | Saturday 01 November 2025 13:41:02 +0000 (0:00:00.262) 0:00:13.242 ***** 2025-11-01 13:41:15.903981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:41:15.904031 | orchestrator | 2025-11-01 13:41:15.904043 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-11-01 13:41:15.904063 | orchestrator | Saturday 01 November 2025 13:41:02 +0000 (0:00:00.342) 0:00:13.585 ***** 2025-11-01 13:41:15.904075 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:41:15.904086 | orchestrator | 2025-11-01 13:41:15.904097 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-11-01 13:41:15.904109 | orchestrator | Saturday 01 November 2025 13:41:03 +0000 (0:00:00.404) 0:00:13.990 ***** 2025-11-01 13:41:15.904120 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:41:15.904132 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:15.904143 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:15.904154 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:41:15.904164 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:15.904175 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:41:15.904186 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:15.904197 | orchestrator | 2025-11-01 13:41:15.904207 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-11-01 13:41:15.904218 | orchestrator | Saturday 01 November 2025 13:41:04 +0000 (0:00:01.613) 0:00:15.604 ***** 2025-11-01 13:41:15.904229 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:41:15.904240 | orchestrator | skipping: [testbed-node-1] 2025-11-01 
13:41:15.904251 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:41:15.904262 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:41:15.904272 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:41:15.904283 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:41:15.904293 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:41:15.904305 | orchestrator | 2025-11-01 13:41:15.904317 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-11-01 13:41:15.904329 | orchestrator | Saturday 01 November 2025 13:41:05 +0000 (0:00:00.276) 0:00:15.880 ***** 2025-11-01 13:41:15.904371 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:41:15.904384 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:41:15.904396 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:41:15.904409 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:15.904420 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:15.904432 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:15.904444 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:15.904457 | orchestrator | 2025-11-01 13:41:15.904469 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-11-01 13:41:15.904481 | orchestrator | Saturday 01 November 2025 13:41:05 +0000 (0:00:00.680) 0:00:16.561 ***** 2025-11-01 13:41:15.904494 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:41:15.904506 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:41:15.904518 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:41:15.904530 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:41:15.904542 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:41:15.904554 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:41:15.904566 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:41:15.904578 | orchestrator | 2025-11-01 13:41:15.904592 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-11-01 13:41:15.904606 | orchestrator | Saturday 01 November 2025 13:41:06 +0000 (0:00:00.281) 0:00:16.843 ***** 2025-11-01 13:41:15.904619 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:41:15.904631 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:41:15.904643 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:41:15.904655 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:41:15.904666 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:41:15.904677 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:41:15.904697 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:15.904708 | orchestrator | 2025-11-01 13:41:15.904719 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-11-01 13:41:15.904730 | orchestrator | Saturday 01 November 2025 13:41:06 +0000 (0:00:00.622) 0:00:17.466 ***** 2025-11-01 13:41:15.904741 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:41:15.904751 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:41:15.904762 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:41:15.904773 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:41:15.904784 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:41:15.904795 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:15.904805 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:41:15.904816 | orchestrator | 2025-11-01 13:41:15.904827 | orchestrator | TASK [osism.commons.resolvconf 
: Start/enable systemd-resolved service] ******** 2025-11-01 13:41:15.904838 | orchestrator | Saturday 01 November 2025 13:41:07 +0000 (0:00:01.150) 0:00:18.616 ***** 2025-11-01 13:41:15.904849 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:15.904860 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:41:15.904871 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:15.904882 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:41:15.904893 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:41:15.904904 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:15.904915 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:15.904926 | orchestrator | 2025-11-01 13:41:15.904937 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-11-01 13:41:15.904948 | orchestrator | Saturday 01 November 2025 13:41:08 +0000 (0:00:01.200) 0:00:19.816 ***** 2025-11-01 13:41:15.904977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:41:15.904989 | orchestrator | 2025-11-01 13:41:15.905000 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-11-01 13:41:15.905012 | orchestrator | Saturday 01 November 2025 13:41:09 +0000 (0:00:00.358) 0:00:20.174 ***** 2025-11-01 13:41:15.905022 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:41:15.905033 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:41:15.905044 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:41:15.905055 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:41:15.905065 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:41:15.905076 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:41:15.905087 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:41:15.905098 | orchestrator | 2025-11-01 13:41:15.905109 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-11-01 13:41:15.905125 | orchestrator | Saturday 01 November 2025 13:41:10 +0000 (0:00:01.660) 0:00:21.835 ***** 2025-11-01 13:41:15.905137 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:41:15.905148 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:41:15.905159 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:41:15.905170 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:15.905180 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:15.905191 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:15.905202 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:15.905213 | orchestrator | 2025-11-01 13:41:15.905224 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-11-01 13:41:15.905235 | orchestrator | Saturday 01 November 2025 13:41:11 +0000 (0:00:00.292) 0:00:22.127 ***** 2025-11-01 13:41:15.905246 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:41:15.905257 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:41:15.905268 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:41:15.905278 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:15.905289 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:15.905300 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:15.905311 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:15.905322 | orchestrator | 2025-11-01 13:41:15.905349 | orchestrator | 
TASK [osism.commons.repository : Set repositories to default] ****************** 2025-11-01 13:41:15.905368 | orchestrator | Saturday 01 November 2025 13:41:11 +0000 (0:00:00.379) 0:00:22.507 ***** 2025-11-01 13:41:15.905379 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:41:15.905390 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:41:15.905401 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:41:15.905412 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:15.905422 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:15.905433 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:15.905444 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:15.905455 | orchestrator | 2025-11-01 13:41:15.905466 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-11-01 13:41:15.905477 | orchestrator | Saturday 01 November 2025 13:41:11 +0000 (0:00:00.267) 0:00:22.774 ***** 2025-11-01 13:41:15.905488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:41:15.905501 | orchestrator | 2025-11-01 13:41:15.905512 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-11-01 13:41:15.905523 | orchestrator | Saturday 01 November 2025 13:41:12 +0000 (0:00:00.375) 0:00:23.150 ***** 2025-11-01 13:41:15.905534 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:41:15.905545 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:41:15.905555 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:41:15.905566 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:15.905577 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:15.905588 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:15.905599 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:15.905609 | orchestrator | 2025-11-01 13:41:15.905620 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-11-01 13:41:15.905631 | orchestrator | Saturday 01 November 2025 13:41:12 +0000 (0:00:00.601) 0:00:23.751 ***** 2025-11-01 13:41:15.905642 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:41:15.905653 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:41:15.905664 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:41:15.905674 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:41:15.905685 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:41:15.905696 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:41:15.905707 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:41:15.905717 | orchestrator | 2025-11-01 13:41:15.905728 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-11-01 13:41:15.905739 | orchestrator | Saturday 01 November 2025 13:41:13 +0000 (0:00:00.251) 0:00:24.003 ***** 2025-11-01 13:41:15.905750 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:41:15.905761 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:15.905772 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:15.905782 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:15.905793 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:41:15.905804 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:41:15.905815 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:15.905825 | orchestrator | 2025-11-01 
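The repository role ensures /etc/apt/sources.list.d exists and drops a 99osism apt configuration file before switching the hosts to the deb822 ubuntu.sources format in the following tasks. A sketch of tasks in that spirit; the destination path and the file contents are assumptions for illustration, not the real OSISM defaults:

- hosts: all
  become: true
  tasks:
    - name: Create /etc/apt/sources.list.d directory
      ansible.builtin.file:
        path: /etc/apt/sources.list.d
        state: directory
        mode: "0755"

    # The real 99osism contents are not shown in this log; the option below is only an example.
    - name: Copy 99osism apt configuration
      ansible.builtin.copy:
        dest: /etc/apt/apt.conf.d/99osism
        content: |
          APT::Install-Recommends "false";
        mode: "0644"
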
13:41:15.905836 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-11-01 13:41:15.905847 | orchestrator | Saturday 01 November 2025 13:41:14 +0000 (0:00:01.069) 0:00:25.072 ***** 2025-11-01 13:41:15.905858 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:41:15.905869 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:41:15.905880 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:15.905891 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:15.905902 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:41:15.905913 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:15.905924 | orchestrator | ok: [testbed-manager] 2025-11-01 13:41:15.905935 | orchestrator | 2025-11-01 13:41:15.905946 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-11-01 13:41:15.905956 | orchestrator | Saturday 01 November 2025 13:41:14 +0000 (0:00:00.611) 0:00:25.683 ***** 2025-11-01 13:41:15.905974 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:15.905985 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:41:15.905996 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:15.906006 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:15.906089 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:42:06.605088 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:42:06.605193 | orchestrator | ok: [testbed-manager] 2025-11-01 13:42:06.605209 | orchestrator | 2025-11-01 13:42:06.605223 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-11-01 13:42:06.605237 | orchestrator | Saturday 01 November 2025 13:41:15 +0000 (0:00:01.047) 0:00:26.731 ***** 2025-11-01 13:42:06.605249 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:42:06.605260 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:42:06.605271 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:42:06.605282 | orchestrator | changed: [testbed-manager] 2025-11-01 13:42:06.605293 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:42:06.605304 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:42:06.605315 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:42:06.605326 | orchestrator | 2025-11-01 13:42:06.605372 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-11-01 13:42:06.605384 | orchestrator | Saturday 01 November 2025 13:41:38 +0000 (0:00:22.369) 0:00:49.100 ***** 2025-11-01 13:42:06.605395 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:42:06.605406 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:42:06.605417 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:42:06.605428 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:42:06.605439 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:42:06.605449 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:42:06.605460 | orchestrator | ok: [testbed-manager] 2025-11-01 13:42:06.605471 | orchestrator | 2025-11-01 13:42:06.605482 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-11-01 13:42:06.605492 | orchestrator | Saturday 01 November 2025 13:41:38 +0000 (0:00:00.276) 0:00:49.376 ***** 2025-11-01 13:42:06.605503 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:42:06.605514 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:42:06.605524 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:42:06.605535 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:42:06.605546 | orchestrator | ok: 
[testbed-node-4] 2025-11-01 13:42:06.605556 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:42:06.605567 | orchestrator | ok: [testbed-manager] 2025-11-01 13:42:06.605578 | orchestrator | 2025-11-01 13:42:06.605589 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-11-01 13:42:06.605600 | orchestrator | Saturday 01 November 2025 13:41:38 +0000 (0:00:00.273) 0:00:49.650 ***** 2025-11-01 13:42:06.605610 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:42:06.605624 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:42:06.605637 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:42:06.605649 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:42:06.605661 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:42:06.605673 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:42:06.605686 | orchestrator | ok: [testbed-manager] 2025-11-01 13:42:06.605698 | orchestrator | 2025-11-01 13:42:06.605710 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-11-01 13:42:06.605723 | orchestrator | Saturday 01 November 2025 13:41:39 +0000 (0:00:00.268) 0:00:49.918 ***** 2025-11-01 13:42:06.605738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:42:06.605753 | orchestrator | 2025-11-01 13:42:06.605765 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-11-01 13:42:06.605778 | orchestrator | Saturday 01 November 2025 13:41:39 +0000 (0:00:00.356) 0:00:50.275 ***** 2025-11-01 13:42:06.605790 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:42:06.605827 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:42:06.605839 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:42:06.605851 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:42:06.605863 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:42:06.605876 | orchestrator | ok: [testbed-manager] 2025-11-01 13:42:06.605888 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:42:06.605900 | orchestrator | 2025-11-01 13:42:06.605913 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-11-01 13:42:06.605925 | orchestrator | Saturday 01 November 2025 13:41:41 +0000 (0:00:02.298) 0:00:52.573 ***** 2025-11-01 13:42:06.605937 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:42:06.605949 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:42:06.605962 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:42:06.605974 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:42:06.605985 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:42:06.605996 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:42:06.606006 | orchestrator | changed: [testbed-manager] 2025-11-01 13:42:06.606063 | orchestrator | 2025-11-01 13:42:06.606077 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-11-01 13:42:06.606104 | orchestrator | Saturday 01 November 2025 13:41:42 +0000 (0:00:01.179) 0:00:53.752 ***** 2025-11-01 13:42:06.606115 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:42:06.606126 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:42:06.606137 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:42:06.606147 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:42:06.606158 | 
orchestrator | ok: [testbed-node-3] 2025-11-01 13:42:06.606168 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:42:06.606179 | orchestrator | ok: [testbed-manager] 2025-11-01 13:42:06.606189 | orchestrator | 2025-11-01 13:42:06.606200 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-11-01 13:42:06.606211 | orchestrator | Saturday 01 November 2025 13:41:43 +0000 (0:00:00.891) 0:00:54.644 ***** 2025-11-01 13:42:06.606223 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:42:06.606235 | orchestrator | 2025-11-01 13:42:06.606246 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-11-01 13:42:06.606257 | orchestrator | Saturday 01 November 2025 13:41:44 +0000 (0:00:00.329) 0:00:54.973 ***** 2025-11-01 13:42:06.606268 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:42:06.606278 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:42:06.606289 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:42:06.606300 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:42:06.606310 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:42:06.606321 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:42:06.606331 | orchestrator | changed: [testbed-manager] 2025-11-01 13:42:06.606360 | orchestrator | 2025-11-01 13:42:06.606389 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-11-01 13:42:06.606400 | orchestrator | Saturday 01 November 2025 13:41:45 +0000 (0:00:01.154) 0:00:56.127 ***** 2025-11-01 13:42:06.606411 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:42:06.606422 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:42:06.606433 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:42:06.606443 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:42:06.606454 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:42:06.606464 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:42:06.606475 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:42:06.606486 | orchestrator | 2025-11-01 13:42:06.606497 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2025-11-01 13:42:06.606507 | orchestrator | Saturday 01 November 2025 13:41:45 +0000 (0:00:00.271) 0:00:56.399 ***** 2025-11-01 13:42:06.606524 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:42:06.606544 | orchestrator | 2025-11-01 13:42:06.606555 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2025-11-01 13:42:06.606566 | orchestrator | Saturday 01 November 2025 13:41:45 +0000 (0:00:00.355) 0:00:56.755 ***** 2025-11-01 13:42:06.606577 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:42:06.606587 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:42:06.606598 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:42:06.606609 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:42:06.606619 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:42:06.606630 | orchestrator | ok: [testbed-manager] 2025-11-01 13:42:06.606641 | 
orchestrator | ok: [testbed-node-3] 2025-11-01 13:42:06.606651 | orchestrator | 2025-11-01 13:42:06.606662 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2025-11-01 13:42:06.606673 | orchestrator | Saturday 01 November 2025 13:41:47 +0000 (0:00:01.765) 0:00:58.521 ***** 2025-11-01 13:42:06.606684 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:42:06.606695 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:42:06.606705 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:42:06.606716 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:42:06.606727 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:42:06.606737 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:42:06.606748 | orchestrator | changed: [testbed-manager] 2025-11-01 13:42:06.606759 | orchestrator | 2025-11-01 13:42:06.606770 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-11-01 13:42:06.606780 | orchestrator | Saturday 01 November 2025 13:41:48 +0000 (0:00:01.175) 0:00:59.696 ***** 2025-11-01 13:42:06.606791 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:42:06.606802 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:42:06.606812 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:42:06.606823 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:42:06.606834 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:42:06.606844 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:42:06.606855 | orchestrator | changed: [testbed-manager] 2025-11-01 13:42:06.606866 | orchestrator | 2025-11-01 13:42:06.606876 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-11-01 13:42:06.606887 | orchestrator | Saturday 01 November 2025 13:42:03 +0000 (0:00:14.440) 0:01:14.137 ***** 2025-11-01 13:42:06.606898 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:42:06.606908 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:42:06.606919 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:42:06.606930 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:42:06.606940 | orchestrator | ok: [testbed-manager] 2025-11-01 13:42:06.606951 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:42:06.606962 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:42:06.606972 | orchestrator | 2025-11-01 13:42:06.606983 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-11-01 13:42:06.606994 | orchestrator | Saturday 01 November 2025 13:42:04 +0000 (0:00:01.371) 0:01:15.509 ***** 2025-11-01 13:42:06.607004 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:42:06.607015 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:42:06.607026 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:42:06.607036 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:42:06.607047 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:42:06.607058 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:42:06.607068 | orchestrator | ok: [testbed-manager] 2025-11-01 13:42:06.607079 | orchestrator | 2025-11-01 13:42:06.607090 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-11-01 13:42:06.607101 | orchestrator | Saturday 01 November 2025 13:42:05 +0000 (0:00:01.008) 0:01:16.517 ***** 2025-11-01 13:42:06.607111 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:42:06.607122 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:42:06.607133 | orchestrator | ok: [testbed-node-2] 
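The systohc steps install util-linux-extra, which ships hwclock on current Ubuntu releases, and then write the system time to the hardware clock. A sketch of the equivalent, assuming a plain hwclock call rather than whatever the osism.commons.systohc role actually runs:

- hosts: all
  become: true
  tasks:
    - name: Install util-linux-extra package
      ansible.builtin.apt:
        name: util-linux-extra
        state: present

    # hwclock --systohc copies the current system time into the RTC.
    - name: Sync hardware clock
      ansible.builtin.command: hwclock --systohc
      changed_when: false
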
2025-11-01 13:42:06.607143 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:42:06.607154 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:42:06.607171 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:42:06.607181 | orchestrator | ok: [testbed-manager] 2025-11-01 13:42:06.607192 | orchestrator | 2025-11-01 13:42:06.607203 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-11-01 13:42:06.607214 | orchestrator | Saturday 01 November 2025 13:42:05 +0000 (0:00:00.262) 0:01:16.779 ***** 2025-11-01 13:42:06.607224 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:42:06.607235 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:42:06.607246 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:42:06.607256 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:42:06.607267 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:42:06.607278 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:42:06.607288 | orchestrator | ok: [testbed-manager] 2025-11-01 13:42:06.607299 | orchestrator | 2025-11-01 13:42:06.607310 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-11-01 13:42:06.607321 | orchestrator | Saturday 01 November 2025 13:42:06 +0000 (0:00:00.276) 0:01:17.056 ***** 2025-11-01 13:42:06.607332 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:42:06.607366 | orchestrator | 2025-11-01 13:42:06.607384 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-11-01 13:44:14.684876 | orchestrator | Saturday 01 November 2025 13:42:06 +0000 (0:00:00.375) 0:01:17.431 ***** 2025-11-01 13:44:14.684979 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:44:14.684992 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:44:14.685002 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:44:14.685011 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:44:14.685021 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:44:14.685031 | orchestrator | ok: [testbed-manager] 2025-11-01 13:44:14.685041 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:44:14.685051 | orchestrator | 2025-11-01 13:44:14.685061 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-11-01 13:44:14.685072 | orchestrator | Saturday 01 November 2025 13:42:08 +0000 (0:00:02.205) 0:01:19.636 ***** 2025-11-01 13:44:14.685081 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:44:14.685092 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:44:14.685101 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:44:14.685111 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:44:14.685120 | orchestrator | changed: [testbed-manager] 2025-11-01 13:44:14.685144 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:44:14.685155 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:44:14.685164 | orchestrator | 2025-11-01 13:44:14.685174 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-11-01 13:44:14.685185 | orchestrator | Saturday 01 November 2025 13:42:09 +0000 (0:00:00.720) 0:01:20.356 ***** 2025-11-01 13:44:14.685194 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:44:14.685204 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:44:14.685213 
| orchestrator | ok: [testbed-node-2] 2025-11-01 13:44:14.685223 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:44:14.685233 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:44:14.685242 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:44:14.685252 | orchestrator | ok: [testbed-manager] 2025-11-01 13:44:14.685261 | orchestrator | 2025-11-01 13:44:14.685271 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-11-01 13:44:14.685281 | orchestrator | Saturday 01 November 2025 13:42:09 +0000 (0:00:00.287) 0:01:20.643 ***** 2025-11-01 13:44:14.685290 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:44:14.685300 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:44:14.685309 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:44:14.685319 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:44:14.685328 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:44:14.685391 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:44:14.685402 | orchestrator | ok: [testbed-manager] 2025-11-01 13:44:14.685435 | orchestrator | 2025-11-01 13:44:14.685448 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-11-01 13:44:14.685459 | orchestrator | Saturday 01 November 2025 13:42:11 +0000 (0:00:01.356) 0:01:22.000 ***** 2025-11-01 13:44:14.685469 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:44:14.685480 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:44:14.685490 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:44:14.685501 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:44:14.685511 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:44:14.685521 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:44:14.685532 | orchestrator | changed: [testbed-manager] 2025-11-01 13:44:14.685543 | orchestrator | 2025-11-01 13:44:14.685553 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-11-01 13:44:14.685564 | orchestrator | Saturday 01 November 2025 13:42:12 +0000 (0:00:01.691) 0:01:23.692 ***** 2025-11-01 13:44:14.685575 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:44:14.685586 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:44:14.685597 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:44:14.685607 | orchestrator | ok: [testbed-manager] 2025-11-01 13:44:14.685617 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:44:14.685628 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:44:14.685639 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:44:14.685649 | orchestrator | 2025-11-01 13:44:14.685660 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-11-01 13:44:14.685671 | orchestrator | Saturday 01 November 2025 13:42:15 +0000 (0:00:02.300) 0:01:25.992 ***** 2025-11-01 13:44:14.685682 | orchestrator | ok: [testbed-manager] 2025-11-01 13:44:14.685692 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:44:14.685703 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:44:14.685713 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:44:14.685724 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:44:14.685734 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:44:14.685745 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:44:14.685755 | orchestrator | 2025-11-01 13:44:14.685766 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-11-01 13:44:14.685776 | orchestrator | Saturday 01 November 2025 13:42:46 
+0000 (0:00:31.045) 0:01:57.038 ***** 2025-11-01 13:44:14.685785 | orchestrator | changed: [testbed-manager] 2025-11-01 13:44:14.685795 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:44:14.685804 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:44:14.685814 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:44:14.685823 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:44:14.685833 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:44:14.685842 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:44:14.685852 | orchestrator | 2025-11-01 13:44:14.685862 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-11-01 13:44:14.685871 | orchestrator | Saturday 01 November 2025 13:43:57 +0000 (0:01:11.519) 0:03:08.557 ***** 2025-11-01 13:44:14.685880 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:44:14.685890 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:44:14.685899 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:44:14.685909 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:44:14.685918 | orchestrator | ok: [testbed-manager] 2025-11-01 13:44:14.685928 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:44:14.685937 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:44:14.685947 | orchestrator | 2025-11-01 13:44:14.685956 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-11-01 13:44:14.685966 | orchestrator | Saturday 01 November 2025 13:43:59 +0000 (0:00:02.039) 0:03:10.597 ***** 2025-11-01 13:44:14.685975 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:44:14.685985 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:44:14.685994 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:44:14.686003 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:44:14.686013 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:44:14.686071 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:44:14.686089 | orchestrator | changed: [testbed-manager] 2025-11-01 13:44:14.686098 | orchestrator | 2025-11-01 13:44:14.686108 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-11-01 13:44:14.686118 | orchestrator | Saturday 01 November 2025 13:44:13 +0000 (0:00:13.585) 0:03:24.183 ***** 2025-11-01 13:44:14.686183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-11-01 13:44:14.686200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-11-01 13:44:14.686213 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-11-01 13:44:14.686230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-11-01 13:44:14.686240 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-11-01 13:44:14.686250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-11-01 13:44:14.686260 | orchestrator | 2025-11-01 13:44:14.686270 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-11-01 13:44:14.686279 | orchestrator | Saturday 01 November 2025 13:44:13 +0000 (0:00:00.487) 0:03:24.670 ***** 2025-11-01 13:44:14.686289 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-01 13:44:14.686298 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:44:14.686308 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-01 13:44:14.686317 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-01 13:44:14.686326 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:44:14.686356 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:44:14.686366 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-11-01 13:44:14.686376 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:44:14.686385 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-01 13:44:14.686395 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-01 13:44:14.686412 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-01 13:44:14.686421 | orchestrator | 2025-11-01 13:44:14.686431 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-11-01 13:44:14.686440 | orchestrator | Saturday 01 November 2025 13:44:14 +0000 (0:00:00.696) 0:03:25.367 ***** 2025-11-01 13:44:14.686450 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-01 13:44:14.686460 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-01 13:44:14.686477 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-01 
13:44:14.686487 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-11-01 13:44:14.686499 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-01 13:44:14.686515 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-01 13:44:24.397601 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-01 13:44:24.397697 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-01 13:44:24.397711 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-01 13:44:24.397722 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-01 13:44:24.397732 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-01 13:44:24.397742 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-01 13:44:24.397757 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-01 13:44:24.397767 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-11-01 13:44:24.397777 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-01 13:44:24.397786 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-01 13:44:24.397796 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-01 13:44:24.397806 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-01 13:44:24.397815 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-01 13:44:24.397825 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-01 13:44:24.397834 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-01 13:44:24.397844 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-01 13:44:24.397853 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-01 13:44:24.397863 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-11-01 13:44:24.397872 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-01 13:44:24.397882 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-01 13:44:24.397891 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:44:24.397902 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-01 13:44:24.397911 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:44:24.397921 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-01 13:44:24.397946 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 
0})  2025-11-01 13:44:24.397956 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-01 13:44:24.397966 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:44:24.397975 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-11-01 13:44:24.397985 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-11-01 13:44:24.397994 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-11-01 13:44:24.398004 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-11-01 13:44:24.398013 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-11-01 13:44:24.398078 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-11-01 13:44:24.398089 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-11-01 13:44:24.398098 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-11-01 13:44:24.398107 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-11-01 13:44:24.398117 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-11-01 13:44:24.398126 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:44:24.398136 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-11-01 13:44:24.398145 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-11-01 13:44:24.398157 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-11-01 13:44:24.398167 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-11-01 13:44:24.398178 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-11-01 13:44:24.398204 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-11-01 13:44:24.398216 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-11-01 13:44:24.398226 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-11-01 13:44:24.398237 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-11-01 13:44:24.398248 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-11-01 13:44:24.398258 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-11-01 13:44:24.398273 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-11-01 13:44:24.398286 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-11-01 13:44:24.398296 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-11-01 13:44:24.398307 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-11-01 13:44:24.398317 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-11-01 13:44:24.398328 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-11-01 13:44:24.398371 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-11-01 13:44:24.398383 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-11-01 13:44:24.398401 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-11-01 13:44:24.398412 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-11-01 13:44:24.398423 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-11-01 13:44:24.398433 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-11-01 13:44:24.398444 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-11-01 13:44:24.398455 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-11-01 13:44:24.398465 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-11-01 13:44:24.398476 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-11-01 13:44:24.398487 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-11-01 13:44:24.398497 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-11-01 13:44:24.398508 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-11-01 13:44:24.398518 | orchestrator | 2025-11-01 13:44:24.398528 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-11-01 13:44:24.398538 | orchestrator | Saturday 01 November 2025 13:44:22 +0000 (0:00:07.779) 0:03:33.147 ***** 2025-11-01 13:44:24.398547 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-01 13:44:24.398557 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-01 13:44:24.398566 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-01 13:44:24.398576 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-01 13:44:24.398585 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-01 13:44:24.398594 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-01 13:44:24.398604 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-11-01 13:44:24.398613 | orchestrator | 2025-11-01 13:44:24.398623 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-11-01 13:44:24.398632 | orchestrator | Saturday 01 November 2025 13:44:23 +0000 (0:00:01.501) 0:03:34.649 ***** 2025-11-01 13:44:24.398641 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-01 13:44:24.398651 | 
orchestrator | skipping: [testbed-node-0] 2025-11-01 13:44:24.398660 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-01 13:44:24.398670 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:44:24.398679 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-01 13:44:24.398688 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:44:24.398698 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-01 13:44:24.398707 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:44:24.398717 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-01 13:44:24.398726 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-01 13:44:24.398746 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-01 13:44:37.835435 | orchestrator | 2025-11-01 13:44:37.835530 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2025-11-01 13:44:37.835569 | orchestrator | Saturday 01 November 2025 13:44:24 +0000 (0:00:00.578) 0:03:35.227 ***** 2025-11-01 13:44:37.835581 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-01 13:44:37.835594 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:44:37.835606 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-01 13:44:37.835631 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-01 13:44:37.835642 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:44:37.835653 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:44:37.835664 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-11-01 13:44:37.835674 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:44:37.835685 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-01 13:44:37.835696 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-01 13:44:37.835707 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-11-01 13:44:37.835718 | orchestrator | 2025-11-01 13:44:37.835729 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-11-01 13:44:37.835740 | orchestrator | Saturday 01 November 2025 13:44:24 +0000 (0:00:00.468) 0:03:35.695 ***** 2025-11-01 13:44:37.835751 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-11-01 13:44:37.835762 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-11-01 13:44:37.835773 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:44:37.835784 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-11-01 13:44:37.835794 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:44:37.835805 | 
orchestrator | skipping: [testbed-node-2] 2025-11-01 13:44:37.835817 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-11-01 13:44:37.835827 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:44:37.835838 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-11-01 13:44:37.835849 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-11-01 13:44:37.835859 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-11-01 13:44:37.835870 | orchestrator | 2025-11-01 13:44:37.835881 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-11-01 13:44:37.835892 | orchestrator | Saturday 01 November 2025 13:44:25 +0000 (0:00:00.707) 0:03:36.403 ***** 2025-11-01 13:44:37.835903 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:44:37.835913 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:44:37.835924 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:44:37.835935 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:44:37.835945 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:44:37.835957 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:44:37.835969 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:44:37.835980 | orchestrator | 2025-11-01 13:44:37.835992 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-11-01 13:44:37.836004 | orchestrator | Saturday 01 November 2025 13:44:25 +0000 (0:00:00.369) 0:03:36.772 ***** 2025-11-01 13:44:37.836016 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:44:37.836029 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:44:37.836040 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:44:37.836053 | orchestrator | ok: [testbed-manager] 2025-11-01 13:44:37.836065 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:44:37.836082 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:44:37.836094 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:44:37.836107 | orchestrator | 2025-11-01 13:44:37.836119 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-11-01 13:44:37.836131 | orchestrator | Saturday 01 November 2025 13:44:31 +0000 (0:00:05.241) 0:03:42.014 ***** 2025-11-01 13:44:37.836144 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-11-01 13:44:37.836156 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:44:37.836168 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-11-01 13:44:37.836180 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:44:37.836192 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-11-01 13:44:37.836204 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:44:37.836216 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-11-01 13:44:37.836228 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-11-01 13:44:37.836240 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:44:37.836252 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-11-01 13:44:37.836264 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:44:37.836275 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:44:37.836288 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-11-01 13:44:37.836300 | orchestrator 
| skipping: [testbed-manager] 2025-11-01 13:44:37.836311 | orchestrator | 2025-11-01 13:44:37.836322 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-11-01 13:44:37.836333 | orchestrator | Saturday 01 November 2025 13:44:31 +0000 (0:00:00.380) 0:03:42.394 ***** 2025-11-01 13:44:37.836362 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-11-01 13:44:37.836373 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-11-01 13:44:37.836384 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-11-01 13:44:37.836410 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-11-01 13:44:37.836421 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-11-01 13:44:37.836432 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-11-01 13:44:37.836443 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-11-01 13:44:37.836453 | orchestrator | 2025-11-01 13:44:37.836464 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-11-01 13:44:37.836475 | orchestrator | Saturday 01 November 2025 13:44:32 +0000 (0:00:01.131) 0:03:43.526 ***** 2025-11-01 13:44:37.836488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:44:37.836501 | orchestrator | 2025-11-01 13:44:37.836512 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-11-01 13:44:37.836523 | orchestrator | Saturday 01 November 2025 13:44:33 +0000 (0:00:00.550) 0:03:44.076 ***** 2025-11-01 13:44:37.836534 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:44:37.836544 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:44:37.836555 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:44:37.836566 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:44:37.836576 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:44:37.836587 | orchestrator | ok: [testbed-manager] 2025-11-01 13:44:37.836597 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:44:37.836608 | orchestrator | 2025-11-01 13:44:37.836619 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-11-01 13:44:37.836629 | orchestrator | Saturday 01 November 2025 13:44:34 +0000 (0:00:01.402) 0:03:45.478 ***** 2025-11-01 13:44:37.836640 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:44:37.836651 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:44:37.836661 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:44:37.836672 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:44:37.836682 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:44:37.836693 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:44:37.836703 | orchestrator | ok: [testbed-manager] 2025-11-01 13:44:37.836724 | orchestrator | 2025-11-01 13:44:37.836735 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-11-01 13:44:37.836746 | orchestrator | Saturday 01 November 2025 13:44:35 +0000 (0:00:00.685) 0:03:46.163 ***** 2025-11-01 13:44:37.836757 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:44:37.836768 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:44:37.836778 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:44:37.836789 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:44:37.836799 | orchestrator | changed: 
[testbed-node-3] 2025-11-01 13:44:37.836810 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:44:37.836821 | orchestrator | changed: [testbed-manager] 2025-11-01 13:44:37.836831 | orchestrator | 2025-11-01 13:44:37.836842 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-11-01 13:44:37.836853 | orchestrator | Saturday 01 November 2025 13:44:36 +0000 (0:00:00.718) 0:03:46.882 ***** 2025-11-01 13:44:37.836864 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:44:37.836874 | orchestrator | ok: [testbed-manager] 2025-11-01 13:44:37.836885 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:44:37.836903 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:44:37.836914 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:44:37.836925 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:44:37.836935 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:44:37.836946 | orchestrator | 2025-11-01 13:44:37.836956 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-11-01 13:44:37.836967 | orchestrator | Saturday 01 November 2025 13:44:36 +0000 (0:00:00.713) 0:03:47.595 ***** 2025-11-01 13:44:37.836981 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762003240.1342385, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:44:37.836996 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762003235.1530879, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:44:37.837008 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762003234.3281558, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:44:37.837042 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762003229.9078484, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
2025-11-01 13:44:43.180550 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762003221.5324938, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:44:43.180655 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762003229.731439, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:44:43.180670 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1762003225.5304544, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:44:43.180680 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:44:43.180689 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:44:43.180698 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
2025-11-01 13:44:43.180706 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:44:43.180741 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:44:43.180758 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:44:43.180768 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:44:43.180777 | orchestrator | 2025-11-01 13:44:43.180788 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-11-01 13:44:43.180799 | orchestrator | Saturday 01 November 2025 13:44:37 +0000 (0:00:01.067) 0:03:48.663 ***** 2025-11-01 13:44:43.180808 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:44:43.180818 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:44:43.180826 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:44:43.180835 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:44:43.180843 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:44:43.180851 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:44:43.180860 | orchestrator | changed: [testbed-manager] 2025-11-01 13:44:43.180868 | orchestrator | 2025-11-01 13:44:43.180877 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-11-01 13:44:43.180886 | orchestrator | Saturday 01 November 2025 13:44:38 +0000 (0:00:01.133) 0:03:49.797 ***** 2025-11-01 13:44:43.180894 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:44:43.180903 | orchestrator | 
changed: [testbed-node-0] 2025-11-01 13:44:43.180911 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:44:43.180919 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:44:43.180928 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:44:43.180936 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:44:43.180945 | orchestrator | changed: [testbed-manager] 2025-11-01 13:44:43.180953 | orchestrator | 2025-11-01 13:44:43.180962 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-11-01 13:44:43.180970 | orchestrator | Saturday 01 November 2025 13:44:40 +0000 (0:00:01.136) 0:03:50.933 ***** 2025-11-01 13:44:43.180979 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:44:43.180987 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:44:43.180995 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:44:43.181004 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:44:43.181013 | orchestrator | changed: [testbed-manager] 2025-11-01 13:44:43.181021 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:44:43.181029 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:44:43.181038 | orchestrator | 2025-11-01 13:44:43.181047 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-11-01 13:44:43.181057 | orchestrator | Saturday 01 November 2025 13:44:41 +0000 (0:00:01.247) 0:03:52.180 ***** 2025-11-01 13:44:43.181071 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:44:43.181081 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:44:43.181091 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:44:43.181100 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:44:43.181109 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:44:43.181119 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:44:43.181128 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:44:43.181138 | orchestrator | 2025-11-01 13:44:43.181148 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-11-01 13:44:43.181158 | orchestrator | Saturday 01 November 2025 13:44:41 +0000 (0:00:00.298) 0:03:52.479 ***** 2025-11-01 13:44:43.181167 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:44:43.181178 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:44:43.181187 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:44:43.181197 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:44:43.181206 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:44:43.181215 | orchestrator | ok: [testbed-manager] 2025-11-01 13:44:43.181225 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:44:43.181234 | orchestrator | 2025-11-01 13:44:43.181243 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-11-01 13:44:43.181254 | orchestrator | Saturday 01 November 2025 13:44:42 +0000 (0:00:00.897) 0:03:53.377 ***** 2025-11-01 13:44:43.181266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:44:43.181277 | orchestrator | 2025-11-01 13:44:43.181287 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-11-01 13:44:43.181306 | orchestrator | Saturday 01 November 2025 13:44:43 +0000 (0:00:00.633) 0:03:54.010 ***** 2025-11-01 
13:46:06.138004 | orchestrator | ok: [testbed-manager] 2025-11-01 13:46:06.138201 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:46:06.138219 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:46:06.138231 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:46:06.138242 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:46:06.138253 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:46:06.138263 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:46:06.138275 | orchestrator | 2025-11-01 13:46:06.138288 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-11-01 13:46:06.138300 | orchestrator | Saturday 01 November 2025 13:44:53 +0000 (0:00:09.893) 0:04:03.904 ***** 2025-11-01 13:46:06.138311 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:46:06.138322 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:46:06.138332 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:46:06.138392 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:46:06.138405 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:46:06.138415 | orchestrator | ok: [testbed-manager] 2025-11-01 13:46:06.138426 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:46:06.138437 | orchestrator | 2025-11-01 13:46:06.138447 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-11-01 13:46:06.138458 | orchestrator | Saturday 01 November 2025 13:44:54 +0000 (0:00:01.368) 0:04:05.273 ***** 2025-11-01 13:46:06.138469 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:46:06.138480 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:46:06.138490 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:46:06.138501 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:46:06.138511 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:46:06.138522 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:46:06.138533 | orchestrator | ok: [testbed-manager] 2025-11-01 13:46:06.138544 | orchestrator | 2025-11-01 13:46:06.138557 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-11-01 13:46:06.138571 | orchestrator | Saturday 01 November 2025 13:44:56 +0000 (0:00:02.083) 0:04:07.356 ***** 2025-11-01 13:46:06.138583 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:46:06.138595 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:46:06.138632 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:46:06.138645 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:46:06.138657 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:46:06.138670 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:46:06.138682 | orchestrator | ok: [testbed-manager] 2025-11-01 13:46:06.138693 | orchestrator | 2025-11-01 13:46:06.138706 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-11-01 13:46:06.138734 | orchestrator | Saturday 01 November 2025 13:44:56 +0000 (0:00:00.325) 0:04:07.681 ***** 2025-11-01 13:46:06.138746 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:46:06.138758 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:46:06.138770 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:46:06.138782 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:46:06.138794 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:46:06.138806 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:46:06.138818 | orchestrator | ok: [testbed-manager] 2025-11-01 13:46:06.138830 | orchestrator | 2025-11-01 13:46:06.138842 | orchestrator | TASK 
[osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-11-01 13:46:06.138855 | orchestrator | Saturday 01 November 2025 13:44:57 +0000 (0:00:00.358) 0:04:08.040 ***** 2025-11-01 13:46:06.138868 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:46:06.138880 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:46:06.138892 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:46:06.138903 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:46:06.138914 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:46:06.138924 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:46:06.138935 | orchestrator | ok: [testbed-manager] 2025-11-01 13:46:06.138945 | orchestrator | 2025-11-01 13:46:06.138956 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-11-01 13:46:06.138967 | orchestrator | Saturday 01 November 2025 13:44:57 +0000 (0:00:00.354) 0:04:08.394 ***** 2025-11-01 13:46:06.138977 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:46:06.138988 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:46:06.138998 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:46:06.139009 | orchestrator | ok: [testbed-manager] 2025-11-01 13:46:06.139019 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:46:06.139030 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:46:06.139040 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:46:06.139051 | orchestrator | 2025-11-01 13:46:06.139061 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-11-01 13:46:06.139072 | orchestrator | Saturday 01 November 2025 13:45:02 +0000 (0:00:05.152) 0:04:13.546 ***** 2025-11-01 13:46:06.139086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:46:06.139099 | orchestrator | 2025-11-01 13:46:06.139110 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-11-01 13:46:06.139121 | orchestrator | Saturday 01 November 2025 13:45:03 +0000 (0:00:00.449) 0:04:13.996 ***** 2025-11-01 13:46:06.139131 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-11-01 13:46:06.139142 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-11-01 13:46:06.139153 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-11-01 13:46:06.139163 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-11-01 13:46:06.139174 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:46:06.139184 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-11-01 13:46:06.139195 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:46:06.139206 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-11-01 13:46:06.139217 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-11-01 13:46:06.139227 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:46:06.139238 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-11-01 13:46:06.139257 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:46:06.139268 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-11-01 13:46:06.139278 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-11-01 13:46:06.139289 | orchestrator | skipping: 
[testbed-node-5] => (item=apt-daily-upgrade)  2025-11-01 13:46:06.139300 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:46:06.139328 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-11-01 13:46:06.139356 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:46:06.139368 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-11-01 13:46:06.139379 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-11-01 13:46:06.139389 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:46:06.139400 | orchestrator | 2025-11-01 13:46:06.139411 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-11-01 13:46:06.139421 | orchestrator | Saturday 01 November 2025 13:45:03 +0000 (0:00:00.356) 0:04:14.353 ***** 2025-11-01 13:46:06.139433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:46:06.139444 | orchestrator | 2025-11-01 13:46:06.139455 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-11-01 13:46:06.139465 | orchestrator | Saturday 01 November 2025 13:45:03 +0000 (0:00:00.422) 0:04:14.776 ***** 2025-11-01 13:46:06.139476 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-11-01 13:46:06.139487 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-11-01 13:46:06.139497 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:46:06.139508 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-11-01 13:46:06.139519 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:46:06.139529 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-11-01 13:46:06.139540 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:46:06.139550 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-11-01 13:46:06.139561 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:46:06.139571 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-11-01 13:46:06.139582 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:46:06.139593 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:46:06.139603 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-11-01 13:46:06.139614 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:46:06.139624 | orchestrator | 2025-11-01 13:46:06.139635 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-11-01 13:46:06.139662 | orchestrator | Saturday 01 November 2025 13:45:04 +0000 (0:00:00.363) 0:04:15.140 ***** 2025-11-01 13:46:06.139674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:46:06.139685 | orchestrator | 2025-11-01 13:46:06.139695 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-11-01 13:46:06.139706 | orchestrator | Saturday 01 November 2025 13:45:04 +0000 (0:00:00.460) 0:04:15.601 ***** 2025-11-01 13:46:06.139717 | orchestrator | changed: [testbed-node-2] 2025-11-01 
13:46:06.139727 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:46:06.139738 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:46:06.139748 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:46:06.139759 | orchestrator | changed: [testbed-manager] 2025-11-01 13:46:06.139769 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:46:06.139780 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:46:06.139790 | orchestrator | 2025-11-01 13:46:06.139801 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-11-01 13:46:06.139819 | orchestrator | Saturday 01 November 2025 13:45:39 +0000 (0:00:34.382) 0:04:49.983 ***** 2025-11-01 13:46:06.139830 | orchestrator | changed: [testbed-manager] 2025-11-01 13:46:06.139840 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:46:06.139851 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:46:06.139861 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:46:06.139872 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:46:06.139882 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:46:06.139892 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:46:06.139903 | orchestrator | 2025-11-01 13:46:06.139914 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-11-01 13:46:06.139924 | orchestrator | Saturday 01 November 2025 13:45:48 +0000 (0:00:09.252) 0:04:59.236 ***** 2025-11-01 13:46:06.139935 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:46:06.139945 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:46:06.139956 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:46:06.139966 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:46:06.139977 | orchestrator | changed: [testbed-manager] 2025-11-01 13:46:06.139987 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:46:06.139998 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:46:06.140008 | orchestrator | 2025-11-01 13:46:06.140019 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-11-01 13:46:06.140030 | orchestrator | Saturday 01 November 2025 13:45:57 +0000 (0:00:08.759) 0:05:07.996 ***** 2025-11-01 13:46:06.140040 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:46:06.140051 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:46:06.140061 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:46:06.140072 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:46:06.140082 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:46:06.140093 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:46:06.140103 | orchestrator | ok: [testbed-manager] 2025-11-01 13:46:06.140114 | orchestrator | 2025-11-01 13:46:06.140125 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-11-01 13:46:06.140135 | orchestrator | Saturday 01 November 2025 13:45:59 +0000 (0:00:01.914) 0:05:09.911 ***** 2025-11-01 13:46:06.140146 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:46:06.140157 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:46:06.140167 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:46:06.140178 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:46:06.140188 | orchestrator | changed: [testbed-manager] 2025-11-01 13:46:06.140199 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:46:06.140209 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:46:06.140219 | orchestrator | 2025-11-01 
13:46:06.140241 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-11-01 13:46:18.409025 | orchestrator | Saturday 01 November 2025 13:46:06 +0000 (0:00:07.050) 0:05:16.962 ***** 2025-11-01 13:46:18.409135 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:46:18.409153 | orchestrator | 2025-11-01 13:46:18.409166 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-11-01 13:46:18.409177 | orchestrator | Saturday 01 November 2025 13:46:06 +0000 (0:00:00.456) 0:05:17.418 ***** 2025-11-01 13:46:18.409188 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:46:18.409200 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:46:18.409210 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:46:18.409221 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:46:18.409232 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:46:18.409242 | orchestrator | changed: [testbed-manager] 2025-11-01 13:46:18.409253 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:46:18.409264 | orchestrator | 2025-11-01 13:46:18.409275 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-11-01 13:46:18.409308 | orchestrator | Saturday 01 November 2025 13:46:07 +0000 (0:00:00.872) 0:05:18.290 ***** 2025-11-01 13:46:18.409320 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:46:18.409331 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:46:18.409392 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:46:18.409404 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:46:18.409414 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:46:18.409425 | orchestrator | ok: [testbed-manager] 2025-11-01 13:46:18.409436 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:46:18.409446 | orchestrator | 2025-11-01 13:46:18.409457 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-11-01 13:46:18.409467 | orchestrator | Saturday 01 November 2025 13:46:09 +0000 (0:00:01.871) 0:05:20.162 ***** 2025-11-01 13:46:18.409479 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:46:18.409490 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:46:18.409501 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:46:18.409511 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:46:18.409522 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:46:18.409532 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:46:18.409543 | orchestrator | changed: [testbed-manager] 2025-11-01 13:46:18.409553 | orchestrator | 2025-11-01 13:46:18.409564 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-11-01 13:46:18.409576 | orchestrator | Saturday 01 November 2025 13:46:10 +0000 (0:00:00.883) 0:05:21.045 ***** 2025-11-01 13:46:18.409588 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:46:18.409600 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:46:18.409612 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:46:18.409623 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:46:18.409635 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:46:18.409647 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:46:18.409659 | orchestrator | 
skipping: [testbed-manager] 2025-11-01 13:46:18.409670 | orchestrator | 2025-11-01 13:46:18.409682 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-11-01 13:46:18.409694 | orchestrator | Saturday 01 November 2025 13:46:10 +0000 (0:00:00.370) 0:05:21.416 ***** 2025-11-01 13:46:18.409706 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:46:18.409718 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:46:18.409730 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:46:18.409741 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:46:18.409753 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:46:18.409765 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:46:18.409777 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:46:18.409789 | orchestrator | 2025-11-01 13:46:18.409801 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-11-01 13:46:18.409813 | orchestrator | Saturday 01 November 2025 13:46:11 +0000 (0:00:00.429) 0:05:21.846 ***** 2025-11-01 13:46:18.409825 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:46:18.409837 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:46:18.409848 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:46:18.409861 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:46:18.409873 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:46:18.409884 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:46:18.409896 | orchestrator | ok: [testbed-manager] 2025-11-01 13:46:18.409908 | orchestrator | 2025-11-01 13:46:18.409921 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-11-01 13:46:18.409932 | orchestrator | Saturday 01 November 2025 13:46:11 +0000 (0:00:00.359) 0:05:22.206 ***** 2025-11-01 13:46:18.409942 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:46:18.409953 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:46:18.409963 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:46:18.409974 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:46:18.409984 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:46:18.409995 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:46:18.410005 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:46:18.410079 | orchestrator | 2025-11-01 13:46:18.410094 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-11-01 13:46:18.410106 | orchestrator | Saturday 01 November 2025 13:46:11 +0000 (0:00:00.326) 0:05:22.532 ***** 2025-11-01 13:46:18.410117 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:46:18.410127 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:46:18.410138 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:46:18.410149 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:46:18.410159 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:46:18.410170 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:46:18.410180 | orchestrator | ok: [testbed-manager] 2025-11-01 13:46:18.410191 | orchestrator | 2025-11-01 13:46:18.410201 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-11-01 13:46:18.410212 | orchestrator | Saturday 01 November 2025 13:46:12 +0000 (0:00:00.387) 0:05:22.920 ***** 2025-11-01 13:46:18.410223 | orchestrator | ok: [testbed-node-0] =>  2025-11-01 13:46:18.410233 | orchestrator |  docker_version: 5:27.5.1 2025-11-01 13:46:18.410244 | orchestrator | 
ok: [testbed-node-1] =>  2025-11-01 13:46:18.410254 | orchestrator |  docker_version: 5:27.5.1 2025-11-01 13:46:18.410265 | orchestrator | ok: [testbed-node-2] =>  2025-11-01 13:46:18.410276 | orchestrator |  docker_version: 5:27.5.1 2025-11-01 13:46:18.410299 | orchestrator | ok: [testbed-node-3] =>  2025-11-01 13:46:18.410310 | orchestrator |  docker_version: 5:27.5.1 2025-11-01 13:46:18.410338 | orchestrator | ok: [testbed-node-4] =>  2025-11-01 13:46:18.410384 | orchestrator |  docker_version: 5:27.5.1 2025-11-01 13:46:18.410403 | orchestrator | ok: [testbed-node-5] =>  2025-11-01 13:46:18.410419 | orchestrator |  docker_version: 5:27.5.1 2025-11-01 13:46:18.410434 | orchestrator | ok: [testbed-manager] =>  2025-11-01 13:46:18.410449 | orchestrator |  docker_version: 5:27.5.1 2025-11-01 13:46:18.410467 | orchestrator | 2025-11-01 13:46:18.410486 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-11-01 13:46:18.410503 | orchestrator | Saturday 01 November 2025 13:46:12 +0000 (0:00:00.340) 0:05:23.261 ***** 2025-11-01 13:46:18.410520 | orchestrator | ok: [testbed-node-0] =>  2025-11-01 13:46:18.410531 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-01 13:46:18.410542 | orchestrator | ok: [testbed-node-1] =>  2025-11-01 13:46:18.410552 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-01 13:46:18.410562 | orchestrator | ok: [testbed-node-2] =>  2025-11-01 13:46:18.410573 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-01 13:46:18.410583 | orchestrator | ok: [testbed-node-3] =>  2025-11-01 13:46:18.410594 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-01 13:46:18.410604 | orchestrator | ok: [testbed-node-4] =>  2025-11-01 13:46:18.410615 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-01 13:46:18.410625 | orchestrator | ok: [testbed-node-5] =>  2025-11-01 13:46:18.410636 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-01 13:46:18.410646 | orchestrator | ok: [testbed-manager] =>  2025-11-01 13:46:18.410656 | orchestrator |  docker_cli_version: 5:27.5.1 2025-11-01 13:46:18.410667 | orchestrator | 2025-11-01 13:46:18.410677 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-11-01 13:46:18.410688 | orchestrator | Saturday 01 November 2025 13:46:12 +0000 (0:00:00.336) 0:05:23.597 ***** 2025-11-01 13:46:18.410699 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:46:18.410709 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:46:18.410720 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:46:18.410730 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:46:18.410740 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:46:18.410751 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:46:18.410761 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:46:18.410772 | orchestrator | 2025-11-01 13:46:18.410782 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-11-01 13:46:18.410793 | orchestrator | Saturday 01 November 2025 13:46:13 +0000 (0:00:00.446) 0:05:24.043 ***** 2025-11-01 13:46:18.410803 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:46:18.410822 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:46:18.410832 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:46:18.410843 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:46:18.410853 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:46:18.410864 | orchestrator | 
skipping: [testbed-node-5] 2025-11-01 13:46:18.410874 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:46:18.410885 | orchestrator | 2025-11-01 13:46:18.410895 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-11-01 13:46:18.410906 | orchestrator | Saturday 01 November 2025 13:46:13 +0000 (0:00:00.312) 0:05:24.356 ***** 2025-11-01 13:46:18.410918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:46:18.410930 | orchestrator | 2025-11-01 13:46:18.410941 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-11-01 13:46:18.410952 | orchestrator | Saturday 01 November 2025 13:46:13 +0000 (0:00:00.456) 0:05:24.813 ***** 2025-11-01 13:46:18.410962 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:46:18.410973 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:46:18.410983 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:46:18.410994 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:46:18.411004 | orchestrator | ok: [testbed-manager] 2025-11-01 13:46:18.411015 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:46:18.411025 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:46:18.411036 | orchestrator | 2025-11-01 13:46:18.411047 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-11-01 13:46:18.411057 | orchestrator | Saturday 01 November 2025 13:46:14 +0000 (0:00:00.847) 0:05:25.660 ***** 2025-11-01 13:46:18.411068 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:46:18.411078 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:46:18.411088 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:46:18.411099 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:46:18.411109 | orchestrator | ok: [testbed-manager] 2025-11-01 13:46:18.411120 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:46:18.411130 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:46:18.411140 | orchestrator | 2025-11-01 13:46:18.411151 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-11-01 13:46:18.411163 | orchestrator | Saturday 01 November 2025 13:46:17 +0000 (0:00:03.057) 0:05:28.718 ***** 2025-11-01 13:46:18.411174 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-11-01 13:46:18.411185 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-11-01 13:46:18.411195 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-11-01 13:46:18.411206 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:46:18.411216 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-11-01 13:46:18.411227 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-11-01 13:46:18.411237 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-11-01 13:46:18.411247 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-11-01 13:46:18.411258 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-11-01 13:46:18.411268 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-11-01 13:46:18.411278 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:46:18.411289 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-11-01 
13:46:18.411299 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-11-01 13:46:18.411310 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-11-01 13:46:18.411320 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:46:18.411336 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-11-01 13:46:18.411370 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:46:18.411389 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-11-01 13:47:24.994089 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-11-01 13:47:24.994220 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-11-01 13:47:24.994244 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-11-01 13:47:24.994262 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-11-01 13:47:24.994279 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:47:24.994299 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:47:24.994316 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-11-01 13:47:24.994335 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-11-01 13:47:24.994386 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-11-01 13:47:24.994403 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:47:24.994421 | orchestrator | 2025-11-01 13:47:24.994441 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-11-01 13:47:24.994462 | orchestrator | Saturday 01 November 2025 13:46:18 +0000 (0:00:00.675) 0:05:29.394 ***** 2025-11-01 13:47:24.994481 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:24.994500 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:24.994519 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:24.994539 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:24.994558 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:24.994578 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:24.994597 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:24.994617 | orchestrator | 2025-11-01 13:47:24.994637 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-11-01 13:47:24.994657 | orchestrator | Saturday 01 November 2025 13:46:26 +0000 (0:00:07.994) 0:05:37.388 ***** 2025-11-01 13:47:24.994677 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:24.994696 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:24.994715 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:24.994732 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:24.994748 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:24.994767 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:24.994785 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:24.994803 | orchestrator | 2025-11-01 13:47:24.994820 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-11-01 13:47:24.994836 | orchestrator | Saturday 01 November 2025 13:46:27 +0000 (0:00:01.144) 0:05:38.533 ***** 2025-11-01 13:47:24.994852 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:24.994868 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:24.994885 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:24.994901 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:24.994917 | orchestrator | changed: [testbed-node-1] 
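(Editorial aside: the repository and pinning steps in this block follow the usual Debian-family pattern — install the signing key, add the Docker apt repository, refresh the cache, then pin the packages to the version printed earlier, 5:27.5.1. A rough, hypothetical sketch of that pattern — not the actual osism.services.docker tasks; the URL, keyring path and pin mechanism are assumptions — is:)

  # Hypothetical sketch of the add-key / add-repo / pin-version pattern.
  - name: Add repository gpg key            # keyring location is an assumption
    ansible.builtin.get_url:
      url: https://download.docker.com/linux/ubuntu/gpg
      dest: /etc/apt/trusted.gpg.d/docker.asc
      mode: "0644"

  - name: Add repository
    ansible.builtin.apt_repository:
      repo: "deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
      filename: docker
      state: present
      update_cache: true

  - name: Pin docker package version        # the role may instead use dpkg/apt holds
    ansible.builtin.copy:
      dest: /etc/apt/preferences.d/docker-ce
      mode: "0644"
      content: |
        Package: docker-ce*
        Pin: version 5:27.5.1*
        Pin-Priority: 1000

(The later "Unlock containerd package" / "Lock containerd package" tasks in the log suggest the role additionally wraps the containerd install in package holds; the sketch above only covers the pinning side.)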
2025-11-01 13:47:24.994935 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:24.994950 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:24.994966 | orchestrator | 2025-11-01 13:47:24.994983 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-11-01 13:47:24.995001 | orchestrator | Saturday 01 November 2025 13:46:37 +0000 (0:00:09.532) 0:05:48.065 ***** 2025-11-01 13:47:24.995018 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:24.995035 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:24.995051 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:24.995068 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:24.995084 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:24.995099 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:24.995116 | orchestrator | changed: [testbed-manager] 2025-11-01 13:47:24.995132 | orchestrator | 2025-11-01 13:47:24.995149 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-11-01 13:47:24.995166 | orchestrator | Saturday 01 November 2025 13:46:40 +0000 (0:00:03.331) 0:05:51.396 ***** 2025-11-01 13:47:24.995182 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:24.995199 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:24.995217 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:24.995272 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:24.995293 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:24.995312 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:24.995329 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:24.995378 | orchestrator | 2025-11-01 13:47:24.995398 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-11-01 13:47:24.995415 | orchestrator | Saturday 01 November 2025 13:46:42 +0000 (0:00:01.532) 0:05:52.928 ***** 2025-11-01 13:47:24.995433 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:24.995452 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:24.995471 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:24.995488 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:24.995505 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:24.995523 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:24.995541 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:24.995558 | orchestrator | 2025-11-01 13:47:24.995576 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-11-01 13:47:24.995595 | orchestrator | Saturday 01 November 2025 13:46:43 +0000 (0:00:01.514) 0:05:54.443 ***** 2025-11-01 13:47:24.995612 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:47:24.995630 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:47:24.995648 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:47:24.995665 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:47:24.995684 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:47:24.995703 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:47:24.995721 | orchestrator | changed: [testbed-manager] 2025-11-01 13:47:24.995737 | orchestrator | 2025-11-01 13:47:24.995754 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-11-01 13:47:24.995769 | orchestrator | Saturday 01 November 2025 13:46:44 +0000 (0:00:01.200) 0:05:55.643 ***** 2025-11-01 
13:47:24.995787 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:24.995805 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:24.995824 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:24.995840 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:24.995850 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:24.995861 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:24.995871 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:24.995882 | orchestrator | 2025-11-01 13:47:24.995893 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-11-01 13:47:24.995904 | orchestrator | Saturday 01 November 2025 13:46:55 +0000 (0:00:11.085) 0:06:06.729 ***** 2025-11-01 13:47:24.995942 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:24.995953 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:24.995964 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:24.995975 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:24.995985 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:24.995996 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:24.996006 | orchestrator | changed: [testbed-manager] 2025-11-01 13:47:24.996017 | orchestrator | 2025-11-01 13:47:24.996028 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-11-01 13:47:24.996039 | orchestrator | Saturday 01 November 2025 13:46:56 +0000 (0:00:00.983) 0:06:07.712 ***** 2025-11-01 13:47:24.996049 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:24.996060 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:24.996070 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:24.996080 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:24.996091 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:24.996101 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:24.996111 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:24.996122 | orchestrator | 2025-11-01 13:47:24.996132 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-11-01 13:47:24.996143 | orchestrator | Saturday 01 November 2025 13:47:06 +0000 (0:00:09.958) 0:06:17.671 ***** 2025-11-01 13:47:24.996169 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:24.996179 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:24.996190 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:24.996201 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:24.996211 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:24.996221 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:24.996232 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:24.996242 | orchestrator | 2025-11-01 13:47:24.996253 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-11-01 13:47:24.996263 | orchestrator | Saturday 01 November 2025 13:47:18 +0000 (0:00:11.272) 0:06:28.943 ***** 2025-11-01 13:47:24.996274 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-11-01 13:47:24.996285 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-11-01 13:47:24.996295 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-11-01 13:47:24.996306 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-11-01 13:47:24.996316 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-11-01 
13:47:24.996327 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-11-01 13:47:24.996337 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-11-01 13:47:24.996380 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-11-01 13:47:24.996399 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-11-01 13:47:24.996417 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-11-01 13:47:24.996428 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-11-01 13:47:24.996439 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-11-01 13:47:24.996450 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-11-01 13:47:24.996460 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-11-01 13:47:24.996470 | orchestrator | 2025-11-01 13:47:24.996481 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-11-01 13:47:24.996492 | orchestrator | Saturday 01 November 2025 13:47:19 +0000 (0:00:01.268) 0:06:30.212 ***** 2025-11-01 13:47:24.996502 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:47:24.996512 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:47:24.996523 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:47:24.996533 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:47:24.996544 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:47:24.996554 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:47:24.996564 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:47:24.996575 | orchestrator | 2025-11-01 13:47:24.996586 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-11-01 13:47:24.996596 | orchestrator | Saturday 01 November 2025 13:47:19 +0000 (0:00:00.621) 0:06:30.833 ***** 2025-11-01 13:47:24.996607 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:24.996618 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:24.996628 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:24.996638 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:24.996649 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:24.996659 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:24.996669 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:24.996680 | orchestrator | 2025-11-01 13:47:24.996690 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-11-01 13:47:24.996703 | orchestrator | Saturday 01 November 2025 13:47:23 +0000 (0:00:03.939) 0:06:34.773 ***** 2025-11-01 13:47:24.996713 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:47:24.996723 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:47:24.996734 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:47:24.996744 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:47:24.996754 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:47:24.996765 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:47:24.996775 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:47:24.996785 | orchestrator | 2025-11-01 13:47:24.996805 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-11-01 13:47:24.996816 | orchestrator | Saturday 01 November 2025 13:47:24 +0000 (0:00:00.743) 0:06:35.516 ***** 2025-11-01 13:47:24.996826 | orchestrator | skipping: [testbed-node-0] => 
(item=python3-docker)  2025-11-01 13:47:24.996837 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-11-01 13:47:24.996847 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:47:24.996858 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-11-01 13:47:24.996868 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-11-01 13:47:24.996879 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:47:24.996936 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-11-01 13:47:24.996948 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-11-01 13:47:24.996963 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:47:24.996974 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-11-01 13:47:24.996994 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-11-01 13:47:45.974804 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:47:45.974891 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-11-01 13:47:45.974901 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-11-01 13:47:45.974908 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:47:45.974915 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-11-01 13:47:45.974922 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-11-01 13:47:45.974929 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:47:45.974936 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-11-01 13:47:45.974943 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-11-01 13:47:45.974950 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:47:45.974956 | orchestrator | 2025-11-01 13:47:45.974964 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-11-01 13:47:45.974972 | orchestrator | Saturday 01 November 2025 13:47:25 +0000 (0:00:00.576) 0:06:36.093 ***** 2025-11-01 13:47:45.974979 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:47:45.974985 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:47:45.974992 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:47:45.974999 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:47:45.975005 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:47:45.975012 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:47:45.975018 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:47:45.975025 | orchestrator | 2025-11-01 13:47:45.975032 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-11-01 13:47:45.975039 | orchestrator | Saturday 01 November 2025 13:47:25 +0000 (0:00:00.538) 0:06:36.632 ***** 2025-11-01 13:47:45.975045 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:47:45.975052 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:47:45.975058 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:47:45.975065 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:47:45.975071 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:47:45.975078 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:47:45.975084 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:47:45.975091 | orchestrator | 2025-11-01 13:47:45.975097 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-11-01 
13:47:45.975104 | orchestrator | Saturday 01 November 2025 13:47:26 +0000 (0:00:00.565) 0:06:37.197 ***** 2025-11-01 13:47:45.975111 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:47:45.975117 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:47:45.975124 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:47:45.975130 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:47:45.975137 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:47:45.975143 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:47:45.975168 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:47:45.975175 | orchestrator | 2025-11-01 13:47:45.975182 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-11-01 13:47:45.975189 | orchestrator | Saturday 01 November 2025 13:47:26 +0000 (0:00:00.590) 0:06:37.788 ***** 2025-11-01 13:47:45.975195 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:47:45.975202 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:47:45.975208 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:45.975215 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:47:45.975221 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:47:45.975228 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:47:45.975234 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:47:45.975241 | orchestrator | 2025-11-01 13:47:45.975248 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-11-01 13:47:45.975254 | orchestrator | Saturday 01 November 2025 13:47:28 +0000 (0:00:02.010) 0:06:39.799 ***** 2025-11-01 13:47:45.975262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:47:45.975271 | orchestrator | 2025-11-01 13:47:45.975278 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-11-01 13:47:45.975284 | orchestrator | Saturday 01 November 2025 13:47:29 +0000 (0:00:00.942) 0:06:40.741 ***** 2025-11-01 13:47:45.975291 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:45.975298 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:45.975304 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:45.975311 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:45.975317 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:45.975324 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:45.975330 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:45.975337 | orchestrator | 2025-11-01 13:47:45.975376 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-11-01 13:47:45.975385 | orchestrator | Saturday 01 November 2025 13:47:30 +0000 (0:00:00.824) 0:06:41.565 ***** 2025-11-01 13:47:45.975393 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:45.975400 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:45.975408 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:45.975415 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:45.975423 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:45.975430 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:45.975437 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:45.975445 | orchestrator | 2025-11-01 13:47:45.975452 | orchestrator | TASK 
[osism.services.docker : Copy systemd overlay file] *********************** 2025-11-01 13:47:45.975459 | orchestrator | Saturday 01 November 2025 13:47:31 +0000 (0:00:01.121) 0:06:42.687 ***** 2025-11-01 13:47:45.975467 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:45.975474 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:45.975481 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:45.975489 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:45.975496 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:45.975503 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:45.975511 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:45.975519 | orchestrator | 2025-11-01 13:47:45.975526 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-11-01 13:47:45.975534 | orchestrator | Saturday 01 November 2025 13:47:33 +0000 (0:00:01.462) 0:06:44.150 ***** 2025-11-01 13:47:45.975553 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:47:45.975561 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:47:45.975568 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:47:45.975576 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:47:45.975584 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:47:45.975591 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:47:45.975599 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:47:45.975613 | orchestrator | 2025-11-01 13:47:45.975620 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-11-01 13:47:45.975628 | orchestrator | Saturday 01 November 2025 13:47:34 +0000 (0:00:01.336) 0:06:45.486 ***** 2025-11-01 13:47:45.975635 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:45.975643 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:45.975650 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:45.975658 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:45.975665 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:45.975673 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:45.975680 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:45.975687 | orchestrator | 2025-11-01 13:47:45.975694 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-11-01 13:47:45.975703 | orchestrator | Saturday 01 November 2025 13:47:35 +0000 (0:00:01.336) 0:06:46.823 ***** 2025-11-01 13:47:45.975710 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:47:45.975717 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:47:45.975724 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:47:45.975731 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:47:45.975737 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:47:45.975744 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:47:45.975751 | orchestrator | changed: [testbed-manager] 2025-11-01 13:47:45.975757 | orchestrator | 2025-11-01 13:47:45.975764 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-11-01 13:47:45.975770 | orchestrator | Saturday 01 November 2025 13:47:37 +0000 (0:00:01.470) 0:06:48.293 ***** 2025-11-01 13:47:45.975777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 
13:47:45.975784 | orchestrator | 2025-11-01 13:47:45.975791 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-11-01 13:47:45.975797 | orchestrator | Saturday 01 November 2025 13:47:38 +0000 (0:00:01.124) 0:06:49.417 ***** 2025-11-01 13:47:45.975804 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:47:45.975810 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:47:45.975817 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:47:45.975824 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:47:45.975830 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:47:45.975837 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:47:45.975843 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:45.975850 | orchestrator | 2025-11-01 13:47:45.975856 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-11-01 13:47:45.975863 | orchestrator | Saturday 01 November 2025 13:47:40 +0000 (0:00:01.644) 0:06:51.062 ***** 2025-11-01 13:47:45.975870 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:47:45.975876 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:47:45.975883 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:47:45.975889 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:47:45.975896 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:47:45.975903 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:45.975909 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:47:45.975916 | orchestrator | 2025-11-01 13:47:45.975923 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-11-01 13:47:45.975929 | orchestrator | Saturday 01 November 2025 13:47:42 +0000 (0:00:01.838) 0:06:52.900 ***** 2025-11-01 13:47:45.975936 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:47:45.975943 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:47:45.975949 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:47:45.975956 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:47:45.975962 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:47:45.975969 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:47:45.975975 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:45.975982 | orchestrator | 2025-11-01 13:47:45.975988 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-11-01 13:47:45.976005 | orchestrator | Saturday 01 November 2025 13:47:43 +0000 (0:00:01.401) 0:06:54.302 ***** 2025-11-01 13:47:45.976012 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:47:45.976019 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:47:45.976025 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:47:45.976032 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:47:45.976038 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:47:45.976045 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:47:45.976052 | orchestrator | ok: [testbed-manager] 2025-11-01 13:47:45.976058 | orchestrator | 2025-11-01 13:47:45.976065 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-11-01 13:47:45.976072 | orchestrator | Saturday 01 November 2025 13:47:44 +0000 (0:00:01.249) 0:06:55.552 ***** 2025-11-01 13:47:45.976078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:47:45.976085 | orchestrator 
| 2025-11-01 13:47:45.976092 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-01 13:47:45.976098 | orchestrator | Saturday 01 November 2025 13:47:45 +0000 (0:00:00.927) 0:06:56.479 ***** 2025-11-01 13:47:45.976105 | orchestrator | 2025-11-01 13:47:45.976112 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-01 13:47:45.976118 | orchestrator | Saturday 01 November 2025 13:47:45 +0000 (0:00:00.040) 0:06:56.520 ***** 2025-11-01 13:47:45.976125 | orchestrator | 2025-11-01 13:47:45.976132 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-01 13:47:45.976138 | orchestrator | Saturday 01 November 2025 13:47:45 +0000 (0:00:00.047) 0:06:56.567 ***** 2025-11-01 13:47:45.976145 | orchestrator | 2025-11-01 13:47:45.976163 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-01 13:47:45.976170 | orchestrator | Saturday 01 November 2025 13:47:45 +0000 (0:00:00.048) 0:06:56.616 ***** 2025-11-01 13:47:45.976176 | orchestrator | 2025-11-01 13:47:45.976187 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-01 13:48:13.370701 | orchestrator | Saturday 01 November 2025 13:47:45 +0000 (0:00:00.040) 0:06:56.656 ***** 2025-11-01 13:48:13.370815 | orchestrator | 2025-11-01 13:48:13.370832 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-01 13:48:13.370844 | orchestrator | Saturday 01 November 2025 13:47:45 +0000 (0:00:00.046) 0:06:56.703 ***** 2025-11-01 13:48:13.370855 | orchestrator | 2025-11-01 13:48:13.370867 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-01 13:48:13.370877 | orchestrator | Saturday 01 November 2025 13:47:45 +0000 (0:00:00.047) 0:06:56.751 ***** 2025-11-01 13:48:13.370888 | orchestrator | 2025-11-01 13:48:13.370899 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-11-01 13:48:13.370909 | orchestrator | Saturday 01 November 2025 13:47:45 +0000 (0:00:00.041) 0:06:56.792 ***** 2025-11-01 13:48:13.370920 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:48:13.370932 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:48:13.370942 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:48:13.370953 | orchestrator | 2025-11-01 13:48:13.370964 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-11-01 13:48:13.370974 | orchestrator | Saturday 01 November 2025 13:47:47 +0000 (0:00:01.369) 0:06:58.161 ***** 2025-11-01 13:48:13.370985 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:48:13.370997 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:48:13.371007 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:48:13.371017 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:48:13.371028 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:48:13.371038 | orchestrator | changed: [testbed-manager] 2025-11-01 13:48:13.371049 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:48:13.371059 | orchestrator | 2025-11-01 13:48:13.371070 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2025-11-01 13:48:13.371107 | orchestrator | Saturday 01 November 2025 13:47:48 +0000 (0:00:01.603) 0:06:59.765 ***** 2025-11-01 13:48:13.371118 | orchestrator | 
changed: [testbed-node-0] 2025-11-01 13:48:13.371128 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:48:13.371139 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:48:13.371149 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:48:13.371159 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:48:13.371170 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:48:13.371180 | orchestrator | changed: [testbed-manager] 2025-11-01 13:48:13.371190 | orchestrator | 2025-11-01 13:48:13.371201 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-11-01 13:48:13.371212 | orchestrator | Saturday 01 November 2025 13:47:50 +0000 (0:00:01.312) 0:07:01.078 ***** 2025-11-01 13:48:13.371222 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:48:13.371233 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:48:13.371246 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:48:13.371257 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:48:13.371270 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:48:13.371281 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:48:13.371293 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:48:13.371305 | orchestrator | 2025-11-01 13:48:13.371318 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-11-01 13:48:13.371331 | orchestrator | Saturday 01 November 2025 13:47:52 +0000 (0:00:02.240) 0:07:03.318 ***** 2025-11-01 13:48:13.371343 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:48:13.371384 | orchestrator | 2025-11-01 13:48:13.371396 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-11-01 13:48:13.371408 | orchestrator | Saturday 01 November 2025 13:47:52 +0000 (0:00:00.091) 0:07:03.409 ***** 2025-11-01 13:48:13.371420 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:48:13.371432 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:48:13.371443 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:48:13.371455 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:48:13.371467 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:48:13.371479 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:48:13.371490 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:13.371502 | orchestrator | 2025-11-01 13:48:13.371514 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-11-01 13:48:13.371527 | orchestrator | Saturday 01 November 2025 13:47:53 +0000 (0:00:00.987) 0:07:04.396 ***** 2025-11-01 13:48:13.371539 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:48:13.371551 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:48:13.371562 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:48:13.371575 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:48:13.371588 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:48:13.371599 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:48:13.371609 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:48:13.371620 | orchestrator | 2025-11-01 13:48:13.371630 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-11-01 13:48:13.371641 | orchestrator | Saturday 01 November 2025 13:47:54 +0000 (0:00:00.754) 0:07:05.151 ***** 2025-11-01 13:48:13.371653 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:48:13.371665 | orchestrator | 2025-11-01 13:48:13.371676 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-11-01 13:48:13.371687 | orchestrator | Saturday 01 November 2025 13:47:55 +0000 (0:00:01.003) 0:07:06.155 ***** 2025-11-01 13:48:13.371697 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:48:13.371708 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:48:13.371719 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:48:13.371730 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:48:13.371740 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:48:13.371757 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:48:13.371767 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:13.371778 | orchestrator | 2025-11-01 13:48:13.371789 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-11-01 13:48:13.371813 | orchestrator | Saturday 01 November 2025 13:47:56 +0000 (0:00:00.862) 0:07:07.018 ***** 2025-11-01 13:48:13.371825 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-11-01 13:48:13.371836 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-11-01 13:48:13.371864 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-11-01 13:48:13.371876 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-11-01 13:48:13.371886 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-11-01 13:48:13.371897 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-11-01 13:48:13.371907 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-11-01 13:48:13.371918 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-11-01 13:48:13.371929 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-11-01 13:48:13.371939 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-11-01 13:48:13.371950 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-11-01 13:48:13.371960 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-11-01 13:48:13.371970 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-11-01 13:48:13.371981 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-11-01 13:48:13.371991 | orchestrator | 2025-11-01 13:48:13.372002 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-11-01 13:48:13.372013 | orchestrator | Saturday 01 November 2025 13:47:58 +0000 (0:00:02.794) 0:07:09.812 ***** 2025-11-01 13:48:13.372023 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:48:13.372033 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:48:13.372044 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:48:13.372054 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:48:13.372065 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:48:13.372075 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:48:13.372086 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:48:13.372096 | orchestrator | 2025-11-01 13:48:13.372107 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-11-01 
13:48:13.372117 | orchestrator | Saturday 01 November 2025 13:47:59 +0000 (0:00:00.583) 0:07:10.395 ***** 2025-11-01 13:48:13.372130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:48:13.372142 | orchestrator | 2025-11-01 13:48:13.372153 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-11-01 13:48:13.372164 | orchestrator | Saturday 01 November 2025 13:48:00 +0000 (0:00:00.945) 0:07:11.341 ***** 2025-11-01 13:48:13.372174 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:48:13.372185 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:48:13.372195 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:48:13.372206 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:48:13.372216 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:48:13.372227 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:48:13.372237 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:13.372248 | orchestrator | 2025-11-01 13:48:13.372258 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-11-01 13:48:13.372269 | orchestrator | Saturday 01 November 2025 13:48:01 +0000 (0:00:00.938) 0:07:12.279 ***** 2025-11-01 13:48:13.372280 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:48:13.372290 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:48:13.372301 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:48:13.372311 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:48:13.372330 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:48:13.372341 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:48:13.372368 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:13.372379 | orchestrator | 2025-11-01 13:48:13.372390 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-11-01 13:48:13.372400 | orchestrator | Saturday 01 November 2025 13:48:02 +0000 (0:00:01.144) 0:07:13.423 ***** 2025-11-01 13:48:13.372411 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:48:13.372421 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:48:13.372432 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:48:13.372442 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:48:13.372452 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:48:13.372463 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:48:13.372473 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:48:13.372483 | orchestrator | 2025-11-01 13:48:13.372494 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-11-01 13:48:13.372505 | orchestrator | Saturday 01 November 2025 13:48:03 +0000 (0:00:00.613) 0:07:14.037 ***** 2025-11-01 13:48:13.372515 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:48:13.372526 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:48:13.372536 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:48:13.372547 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:48:13.372557 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:48:13.372567 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:48:13.372578 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:13.372588 | orchestrator | 2025-11-01 13:48:13.372599 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose 
script] *************** 2025-11-01 13:48:13.372609 | orchestrator | Saturday 01 November 2025 13:48:04 +0000 (0:00:01.572) 0:07:15.610 ***** 2025-11-01 13:48:13.372620 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:48:13.372630 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:48:13.372641 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:48:13.372651 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:48:13.372662 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:48:13.372672 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:48:13.372683 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:48:13.372693 | orchestrator | 2025-11-01 13:48:13.372704 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-11-01 13:48:13.372715 | orchestrator | Saturday 01 November 2025 13:48:05 +0000 (0:00:00.581) 0:07:16.192 ***** 2025-11-01 13:48:13.372725 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:13.372735 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:48:13.372746 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:48:13.372756 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:48:13.372772 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:48:13.372783 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:48:13.372793 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:48:13.372804 | orchestrator | 2025-11-01 13:48:13.372821 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-11-01 13:48:47.555751 | orchestrator | Saturday 01 November 2025 13:48:13 +0000 (0:00:07.996) 0:07:24.188 ***** 2025-11-01 13:48:47.555832 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:48:47.555841 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:48:47.555847 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:48:47.555853 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:48:47.555859 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:48:47.555865 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:48:47.555871 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:47.555877 | orchestrator | 2025-11-01 13:48:47.555884 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-11-01 13:48:47.555890 | orchestrator | Saturday 01 November 2025 13:48:14 +0000 (0:00:01.326) 0:07:25.515 ***** 2025-11-01 13:48:47.555896 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:48:47.555902 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:48:47.555924 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:48:47.555930 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:47.555936 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:48:47.555941 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:48:47.555947 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:48:47.555953 | orchestrator | 2025-11-01 13:48:47.555958 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-11-01 13:48:47.555964 | orchestrator | Saturday 01 November 2025 13:48:16 +0000 (0:00:01.706) 0:07:27.222 ***** 2025-11-01 13:48:47.555970 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:48:47.555975 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:48:47.555981 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:48:47.555986 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:48:47.555992 | 
orchestrator | changed: [testbed-node-5] 2025-11-01 13:48:47.555997 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:48:47.556003 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:47.556009 | orchestrator | 2025-11-01 13:48:47.556014 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-11-01 13:48:47.556020 | orchestrator | Saturday 01 November 2025 13:48:18 +0000 (0:00:01.743) 0:07:28.965 ***** 2025-11-01 13:48:47.556026 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:48:47.556031 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:48:47.556037 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:48:47.556042 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:48:47.556048 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:48:47.556054 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:48:47.556059 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:47.556065 | orchestrator | 2025-11-01 13:48:47.556071 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-11-01 13:48:47.556077 | orchestrator | Saturday 01 November 2025 13:48:19 +0000 (0:00:01.129) 0:07:30.095 ***** 2025-11-01 13:48:47.556082 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:48:47.556088 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:48:47.556093 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:48:47.556099 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:48:47.556104 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:48:47.556110 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:48:47.556116 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:48:47.556121 | orchestrator | 2025-11-01 13:48:47.556127 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-11-01 13:48:47.556133 | orchestrator | Saturday 01 November 2025 13:48:20 +0000 (0:00:00.864) 0:07:30.959 ***** 2025-11-01 13:48:47.556138 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:48:47.556144 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:48:47.556149 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:48:47.556155 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:48:47.556160 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:48:47.556166 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:48:47.556171 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:48:47.556177 | orchestrator | 2025-11-01 13:48:47.556183 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-11-01 13:48:47.556188 | orchestrator | Saturday 01 November 2025 13:48:20 +0000 (0:00:00.578) 0:07:31.538 ***** 2025-11-01 13:48:47.556194 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:48:47.556199 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:48:47.556205 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:48:47.556211 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:48:47.556216 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:48:47.556222 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:48:47.556227 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:47.556233 | orchestrator | 2025-11-01 13:48:47.556239 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-11-01 13:48:47.556245 | orchestrator | Saturday 01 November 2025 13:48:21 +0000 (0:00:00.544) 0:07:32.083 ***** 2025-11-01 13:48:47.556258 | 
orchestrator | ok: [testbed-node-0] 2025-11-01 13:48:47.556264 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:48:47.556269 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:48:47.556275 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:48:47.556280 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:48:47.556286 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:48:47.556292 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:47.556297 | orchestrator | 2025-11-01 13:48:47.556303 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-11-01 13:48:47.556309 | orchestrator | Saturday 01 November 2025 13:48:22 +0000 (0:00:00.776) 0:07:32.859 ***** 2025-11-01 13:48:47.556314 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:48:47.556320 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:48:47.556326 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:48:47.556331 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:48:47.556337 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:48:47.556342 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:48:47.556373 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:47.556379 | orchestrator | 2025-11-01 13:48:47.556385 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-11-01 13:48:47.556391 | orchestrator | Saturday 01 November 2025 13:48:22 +0000 (0:00:00.554) 0:07:33.414 ***** 2025-11-01 13:48:47.556397 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:48:47.556402 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:48:47.556408 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:48:47.556414 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:48:47.556420 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:48:47.556425 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:48:47.556431 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:47.556437 | orchestrator | 2025-11-01 13:48:47.556443 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-11-01 13:48:47.556459 | orchestrator | Saturday 01 November 2025 13:48:27 +0000 (0:00:05.379) 0:07:38.794 ***** 2025-11-01 13:48:47.556465 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:48:47.556471 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:48:47.556476 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:48:47.556482 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:48:47.556488 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:48:47.556507 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:48:47.556513 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:48:47.556519 | orchestrator | 2025-11-01 13:48:47.556525 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-11-01 13:48:47.556530 | orchestrator | Saturday 01 November 2025 13:48:28 +0000 (0:00:00.582) 0:07:39.376 ***** 2025-11-01 13:48:47.556538 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:48:47.556546 | orchestrator | 2025-11-01 13:48:47.556551 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-11-01 13:48:47.556557 | orchestrator | Saturday 01 November 2025 13:48:29 +0000 (0:00:01.087) 0:07:40.463 ***** 
2025-11-01 13:48:47.556563 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:48:47.556568 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:48:47.556574 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:48:47.556580 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:48:47.556585 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:48:47.556591 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:48:47.556596 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:47.556602 | orchestrator | 2025-11-01 13:48:47.556608 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-11-01 13:48:47.556614 | orchestrator | Saturday 01 November 2025 13:48:31 +0000 (0:00:02.258) 0:07:42.722 ***** 2025-11-01 13:48:47.556619 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:48:47.556625 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:48:47.556635 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:48:47.556641 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:48:47.556647 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:48:47.556652 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:48:47.556658 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:47.556663 | orchestrator | 2025-11-01 13:48:47.556669 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-11-01 13:48:47.556675 | orchestrator | Saturday 01 November 2025 13:48:33 +0000 (0:00:01.248) 0:07:43.971 ***** 2025-11-01 13:48:47.556681 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:48:47.556686 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:48:47.556692 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:48:47.556698 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:48:47.556703 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:48:47.556709 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:48:47.556714 | orchestrator | ok: [testbed-manager] 2025-11-01 13:48:47.556720 | orchestrator | 2025-11-01 13:48:47.556726 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-11-01 13:48:47.556732 | orchestrator | Saturday 01 November 2025 13:48:34 +0000 (0:00:00.913) 0:07:44.884 ***** 2025-11-01 13:48:47.556738 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-01 13:48:47.556745 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-01 13:48:47.556750 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-01 13:48:47.556756 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-01 13:48:47.556762 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-01 13:48:47.556767 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-01 13:48:47.556773 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-01 13:48:47.556779 | 
orchestrator | 2025-11-01 13:48:47.556785 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-11-01 13:48:47.556790 | orchestrator | Saturday 01 November 2025 13:48:36 +0000 (0:00:02.047) 0:07:46.932 ***** 2025-11-01 13:48:47.556796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:48:47.556802 | orchestrator | 2025-11-01 13:48:47.556808 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-11-01 13:48:47.556814 | orchestrator | Saturday 01 November 2025 13:48:36 +0000 (0:00:00.887) 0:07:47.819 ***** 2025-11-01 13:48:47.556819 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:48:47.556825 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:48:47.556831 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:48:47.556836 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:48:47.556842 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:48:47.556848 | orchestrator | changed: [testbed-manager] 2025-11-01 13:48:47.556857 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:48:47.556862 | orchestrator | 2025-11-01 13:48:47.556868 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-11-01 13:48:47.556877 | orchestrator | Saturday 01 November 2025 13:48:47 +0000 (0:00:10.552) 0:07:58.372 ***** 2025-11-01 13:49:20.654792 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:49:20.654900 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:49:20.654940 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:49:20.654951 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:49:20.654962 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:49:20.654972 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:49:20.654983 | orchestrator | ok: [testbed-manager] 2025-11-01 13:49:20.654994 | orchestrator | 2025-11-01 13:49:20.655007 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-11-01 13:49:20.655020 | orchestrator | Saturday 01 November 2025 13:48:49 +0000 (0:00:02.111) 0:08:00.483 ***** 2025-11-01 13:49:20.655030 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:49:20.655041 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:49:20.655051 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:49:20.655062 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:49:20.655072 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:49:20.655082 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:49:20.655092 | orchestrator | 2025-11-01 13:49:20.655103 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-11-01 13:49:20.655114 | orchestrator | Saturday 01 November 2025 13:48:51 +0000 (0:00:01.422) 0:08:01.905 ***** 2025-11-01 13:49:20.655124 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:49:20.655135 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:49:20.655146 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:49:20.655156 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:49:20.655169 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:49:20.655188 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:49:20.655206 | orchestrator | changed: [testbed-manager] 2025-11-01 13:49:20.655224 | orchestrator | 
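At this point the bootstrap plays have configured Docker on the manager and all six nodes (plugins directory, systemd overlay, limits and daemon.json, docker/docker.socket/containerd services), installed docker-compose-plugin together with the osism.target unit, and set up chrony and lldpd; the handlers above confirm that docker and chrony were restarted where their configuration changed. A minimal spot check, not part of this job output and assuming SSH access to a node such as testbed-node-0, could look like:

  ssh testbed-node-0 'systemctl is-active docker docker.socket containerd'   # expect "active" three times
  ssh testbed-node-0 'docker info --format "{{.ServerVersion}}"'             # daemon is up and answering after the restart
  ssh testbed-node-0 'systemctl is-enabled osism.target'                     # target enabled by the docker_compose role
  ssh testbed-node-0 'chronyc tracking'                                      # chrony runs with the copied chrony.conf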
2025-11-01 13:49:20.655241 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-11-01 13:49:20.655258 | orchestrator | 2025-11-01 13:49:20.655276 | orchestrator | TASK [Include hardening role] ************************************************** 2025-11-01 13:49:20.655295 | orchestrator | Saturday 01 November 2025 13:48:52 +0000 (0:00:01.636) 0:08:03.541 ***** 2025-11-01 13:49:20.655314 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:49:20.655332 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:49:20.655381 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:49:20.655401 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:49:20.655420 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:49:20.655438 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:49:20.655457 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:49:20.655476 | orchestrator | 2025-11-01 13:49:20.655497 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-11-01 13:49:20.655516 | orchestrator | 2025-11-01 13:49:20.655535 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-11-01 13:49:20.655554 | orchestrator | Saturday 01 November 2025 13:48:53 +0000 (0:00:00.549) 0:08:04.091 ***** 2025-11-01 13:49:20.655572 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:49:20.655590 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:49:20.655607 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:49:20.655626 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:49:20.655643 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:49:20.655660 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:49:20.655678 | orchestrator | changed: [testbed-manager] 2025-11-01 13:49:20.655690 | orchestrator | 2025-11-01 13:49:20.655701 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-11-01 13:49:20.655711 | orchestrator | Saturday 01 November 2025 13:48:54 +0000 (0:00:01.335) 0:08:05.427 ***** 2025-11-01 13:49:20.655722 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:49:20.655732 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:49:20.655743 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:49:20.655753 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:49:20.655763 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:49:20.655774 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:49:20.655784 | orchestrator | ok: [testbed-manager] 2025-11-01 13:49:20.655794 | orchestrator | 2025-11-01 13:49:20.655805 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-11-01 13:49:20.655827 | orchestrator | Saturday 01 November 2025 13:48:56 +0000 (0:00:01.600) 0:08:07.027 ***** 2025-11-01 13:49:20.655837 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:49:20.655848 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:49:20.655858 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:49:20.655868 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:49:20.655879 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:49:20.655890 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:49:20.655900 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:49:20.655911 | orchestrator | 2025-11-01 13:49:20.655921 | orchestrator | TASK [Include smartd role] ***************************************************** 
2025-11-01 13:49:20.655932 | orchestrator | Saturday 01 November 2025 13:48:56 +0000 (0:00:00.718) 0:08:07.746 ***** 2025-11-01 13:49:20.655943 | orchestrator | included: osism.services.smartd for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:49:20.655954 | orchestrator | 2025-11-01 13:49:20.655964 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-11-01 13:49:20.655975 | orchestrator | Saturday 01 November 2025 13:48:57 +0000 (0:00:00.926) 0:08:08.673 ***** 2025-11-01 13:49:20.655987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:49:20.656000 | orchestrator | 2025-11-01 13:49:20.656011 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-11-01 13:49:20.656021 | orchestrator | Saturday 01 November 2025 13:48:58 +0000 (0:00:00.826) 0:08:09.500 ***** 2025-11-01 13:49:20.656032 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:49:20.656042 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:49:20.656052 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:49:20.656063 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:49:20.656088 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:49:20.656099 | orchestrator | changed: [testbed-manager] 2025-11-01 13:49:20.656110 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:49:20.656120 | orchestrator | 2025-11-01 13:49:20.656131 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-11-01 13:49:20.656161 | orchestrator | Saturday 01 November 2025 13:49:08 +0000 (0:00:10.012) 0:08:19.512 ***** 2025-11-01 13:49:20.656173 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:49:20.656184 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:49:20.656194 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:49:20.656204 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:49:20.656215 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:49:20.656225 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:49:20.656236 | orchestrator | changed: [testbed-manager] 2025-11-01 13:49:20.656246 | orchestrator | 2025-11-01 13:49:20.656257 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-11-01 13:49:20.656267 | orchestrator | Saturday 01 November 2025 13:49:09 +0000 (0:00:00.842) 0:08:20.355 ***** 2025-11-01 13:49:20.656278 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:49:20.656288 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:49:20.656298 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:49:20.656309 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:49:20.656319 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:49:20.656330 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:49:20.656340 | orchestrator | changed: [testbed-manager] 2025-11-01 13:49:20.656370 | orchestrator | 2025-11-01 13:49:20.656381 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-11-01 13:49:20.656392 | orchestrator | Saturday 01 November 2025 13:49:10 +0000 (0:00:01.357) 0:08:21.712 ***** 2025-11-01 13:49:20.656403 | orchestrator | changed: 
[testbed-node-0] 2025-11-01 13:49:20.656421 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:49:20.656431 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:49:20.656442 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:49:20.656452 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:49:20.656462 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:49:20.656473 | orchestrator | changed: [testbed-manager] 2025-11-01 13:49:20.656483 | orchestrator | 2025-11-01 13:49:20.656494 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-11-01 13:49:20.656505 | orchestrator | Saturday 01 November 2025 13:49:13 +0000 (0:00:02.157) 0:08:23.870 ***** 2025-11-01 13:49:20.656515 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:49:20.656526 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:49:20.656536 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:49:20.656547 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:49:20.656557 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:49:20.656567 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:49:20.656578 | orchestrator | changed: [testbed-manager] 2025-11-01 13:49:20.656588 | orchestrator | 2025-11-01 13:49:20.656599 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-11-01 13:49:20.656609 | orchestrator | Saturday 01 November 2025 13:49:14 +0000 (0:00:01.343) 0:08:25.213 ***** 2025-11-01 13:49:20.656620 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:49:20.656630 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:49:20.656640 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:49:20.656651 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:49:20.656661 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:49:20.656671 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:49:20.656682 | orchestrator | changed: [testbed-manager] 2025-11-01 13:49:20.656692 | orchestrator | 2025-11-01 13:49:20.656703 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-11-01 13:49:20.656713 | orchestrator | 2025-11-01 13:49:20.656724 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-11-01 13:49:20.656735 | orchestrator | Saturday 01 November 2025 13:49:15 +0000 (0:00:01.174) 0:08:26.388 ***** 2025-11-01 13:49:20.656746 | orchestrator | included: osism.commons.state for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 13:49:20.656756 | orchestrator | 2025-11-01 13:49:20.656767 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-11-01 13:49:20.656778 | orchestrator | Saturday 01 November 2025 13:49:16 +0000 (0:00:01.076) 0:08:27.465 ***** 2025-11-01 13:49:20.656788 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:49:20.656799 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:49:20.656809 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:49:20.656820 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:49:20.656830 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:49:20.656840 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:49:20.656851 | orchestrator | ok: [testbed-manager] 2025-11-01 13:49:20.656862 | orchestrator | 2025-11-01 13:49:20.656872 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 
2025-11-01 13:49:20.656883 | orchestrator | Saturday 01 November 2025 13:49:17 +0000 (0:00:00.849) 0:08:28.314 *****
2025-11-01 13:49:20.656894 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:49:20.656904 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:49:20.656915 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:49:20.656925 | orchestrator | changed: [testbed-node-3]
2025-11-01 13:49:20.656936 | orchestrator | changed: [testbed-node-4]
2025-11-01 13:49:20.656946 | orchestrator | changed: [testbed-node-5]
2025-11-01 13:49:20.656956 | orchestrator | changed: [testbed-manager]
2025-11-01 13:49:20.656967 | orchestrator |
2025-11-01 13:49:20.656977 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-11-01 13:49:20.656988 | orchestrator | Saturday 01 November 2025 13:49:18 +0000 (0:00:01.191) 0:08:29.506 *****
2025-11-01 13:49:20.656999 | orchestrator | included: osism.commons.state for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-01 13:49:20.657016 | orchestrator |
2025-11-01 13:49:20.657027 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-11-01 13:49:20.657038 | orchestrator | Saturday 01 November 2025 13:49:19 +0000 (0:00:01.060) 0:08:30.566 *****
2025-11-01 13:49:20.657048 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:49:20.657059 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:49:20.657069 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:49:20.657079 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:49:20.657090 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:49:20.657100 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:49:20.657116 | orchestrator | ok: [testbed-manager]
2025-11-01 13:49:20.657127 | orchestrator |
2025-11-01 13:49:20.657138 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-11-01 13:49:20.657155 | orchestrator | Saturday 01 November 2025 13:49:20 +0000 (0:00:00.903) 0:08:31.469 *****
2025-11-01 13:49:22.404951 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:49:22.405045 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:49:22.405059 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:49:22.405070 | orchestrator | changed: [testbed-node-3]
2025-11-01 13:49:22.405080 | orchestrator | changed: [testbed-node-4]
2025-11-01 13:49:22.405091 | orchestrator | changed: [testbed-node-5]
2025-11-01 13:49:22.405102 | orchestrator | changed: [testbed-manager]
2025-11-01 13:49:22.405113 | orchestrator |
2025-11-01 13:49:22.405125 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:49:22.405138 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2025-11-01 13:49:22.405150 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-11-01 13:49:22.405160 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-11-01 13:49:22.405171 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-11-01 13:49:22.405182 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-11-01 13:49:22.405192 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-11-01 13:49:22.405203 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-11-01 13:49:22.405213 | orchestrator |
2025-11-01 13:49:22.405225 | orchestrator |
2025-11-01 13:49:22.405236 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:49:22.405246 | orchestrator | Saturday 01 November 2025 13:49:21 +0000 (0:00:01.187) 0:08:32.657 *****
2025-11-01 13:49:22.405257 | orchestrator | ===============================================================================
2025-11-01 13:49:22.405268 | orchestrator | osism.commons.packages : Install required packages --------------------- 71.52s
2025-11-01 13:49:22.405278 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.38s
2025-11-01 13:49:22.405289 | orchestrator | osism.commons.packages : Download required packages -------------------- 31.05s
2025-11-01 13:49:22.405299 | orchestrator | osism.commons.repository : Update package cache ------------------------ 22.37s
2025-11-01 13:49:22.405310 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 14.44s
2025-11-01 13:49:22.405320 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.59s
2025-11-01 13:49:22.405332 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.27s
2025-11-01 13:49:22.405413 | orchestrator | osism.services.docker : Install containerd package --------------------- 11.09s
2025-11-01 13:49:22.405425 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.55s
2025-11-01 13:49:22.405436 | orchestrator | osism.services.smartd : Install smartmontools package ------------------ 10.01s
2025-11-01 13:49:22.405446 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.96s
2025-11-01 13:49:22.405457 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.89s
2025-11-01 13:49:22.405468 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.53s
2025-11-01 13:49:22.405478 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.25s
2025-11-01 13:49:22.405489 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.76s
2025-11-01 13:49:22.405501 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.00s
2025-11-01 13:49:22.405513 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.99s
2025-11-01 13:49:22.405524 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 7.78s
2025-11-01 13:49:22.405536 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.05s
2025-11-01 13:49:22.405548 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.38s
2025-11-01 13:49:22.765037 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-11-01 13:49:22.765109 | orchestrator | + osism apply network
2025-11-01 13:49:35.807504 | orchestrator | 2025-11-01 13:49:35 | INFO  | Task 94d340fb-607b-4f80-ab4c-a4296bd55f94 (network) was prepared for execution.
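The osism apply network call above queues the osism.commons.network play whose output follows: it renders a netplan configuration (/etc/netplan/01-osism.yaml, as the cleanup task later in the play shows) on every host and removes the cloud-init generated 50-cloud-init.yaml. Once that play has completed, the result could be checked by hand; the commands below are an illustration only, assume SSH access to a node, and are not executed by this job:

  ssh testbed-node-0 'ls /etc/netplan/'   # expect 01-osism.yaml; 50-cloud-init.yaml should be gone
  ssh testbed-node-0 'sudo netplan get'   # dump the merged netplan configuration
  ssh testbed-node-0 'ip -br addr'        # confirm the configured interfaces are up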
2025-11-01 13:49:35.807610 | orchestrator | 2025-11-01 13:49:35 | INFO  | It takes a moment until task 94d340fb-607b-4f80-ab4c-a4296bd55f94 (network) has been started and output is visible here. 2025-11-01 13:50:06.819239 | orchestrator | 2025-11-01 13:50:06.819331 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-11-01 13:50:06.819348 | orchestrator | 2025-11-01 13:50:06.819383 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-11-01 13:50:06.819408 | orchestrator | Saturday 01 November 2025 13:49:40 +0000 (0:00:00.315) 0:00:00.315 ***** 2025-11-01 13:50:06.819420 | orchestrator | ok: [testbed-manager] 2025-11-01 13:50:06.819432 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:50:06.819443 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:50:06.819453 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:50:06.819464 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:50:06.819475 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:50:06.819486 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:50:06.819497 | orchestrator | 2025-11-01 13:50:06.819507 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-11-01 13:50:06.819518 | orchestrator | Saturday 01 November 2025 13:49:41 +0000 (0:00:00.836) 0:00:01.152 ***** 2025-11-01 13:50:06.819531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:50:06.819544 | orchestrator | 2025-11-01 13:50:06.819555 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-11-01 13:50:06.819565 | orchestrator | Saturday 01 November 2025 13:49:42 +0000 (0:00:01.312) 0:00:02.464 ***** 2025-11-01 13:50:06.819576 | orchestrator | ok: [testbed-manager] 2025-11-01 13:50:06.819587 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:50:06.819597 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:50:06.819608 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:50:06.819619 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:50:06.819629 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:50:06.819640 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:50:06.819651 | orchestrator | 2025-11-01 13:50:06.819661 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-11-01 13:50:06.819691 | orchestrator | Saturday 01 November 2025 13:49:44 +0000 (0:00:02.111) 0:00:04.576 ***** 2025-11-01 13:50:06.819702 | orchestrator | ok: [testbed-manager] 2025-11-01 13:50:06.819713 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:50:06.819723 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:50:06.819734 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:50:06.819745 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:50:06.819755 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:50:06.819765 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:50:06.819776 | orchestrator | 2025-11-01 13:50:06.819787 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-11-01 13:50:06.819798 | orchestrator | Saturday 01 November 2025 13:49:46 +0000 (0:00:01.807) 0:00:06.384 ***** 2025-11-01 13:50:06.819811 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-11-01 
13:50:06.819823 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-11-01 13:50:06.819836 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-11-01 13:50:06.819848 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-11-01 13:50:06.819860 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-11-01 13:50:06.819873 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-11-01 13:50:06.819885 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-11-01 13:50:06.819897 | orchestrator | 2025-11-01 13:50:06.819910 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-11-01 13:50:06.819923 | orchestrator | Saturday 01 November 2025 13:49:47 +0000 (0:00:01.024) 0:00:07.408 ***** 2025-11-01 13:50:06.819935 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-01 13:50:06.819947 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-01 13:50:06.819960 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 13:50:06.819972 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 13:50:06.819984 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-01 13:50:06.819996 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-01 13:50:06.820008 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-01 13:50:06.820020 | orchestrator | 2025-11-01 13:50:06.820032 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-11-01 13:50:06.820045 | orchestrator | Saturday 01 November 2025 13:49:51 +0000 (0:00:03.775) 0:00:11.183 ***** 2025-11-01 13:50:06.820058 | orchestrator | changed: [testbed-manager] 2025-11-01 13:50:06.820070 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:50:06.820082 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:50:06.820094 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:50:06.820106 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:50:06.820119 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:50:06.820131 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:50:06.820144 | orchestrator | 2025-11-01 13:50:06.820156 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-11-01 13:50:06.820167 | orchestrator | Saturday 01 November 2025 13:49:53 +0000 (0:00:01.721) 0:00:12.905 ***** 2025-11-01 13:50:06.820177 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 13:50:06.820188 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 13:50:06.820198 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-01 13:50:06.820209 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-01 13:50:06.820219 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-01 13:50:06.820230 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-01 13:50:06.820240 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-01 13:50:06.820251 | orchestrator | 2025-11-01 13:50:06.820261 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-11-01 13:50:06.820272 | orchestrator | Saturday 01 November 2025 13:49:55 +0000 (0:00:01.875) 0:00:14.780 ***** 2025-11-01 13:50:06.820283 | orchestrator | ok: [testbed-manager] 2025-11-01 13:50:06.820293 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:50:06.820304 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:50:06.820321 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:50:06.820332 | orchestrator | ok: 
[testbed-node-3] 2025-11-01 13:50:06.820342 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:50:06.820370 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:50:06.820382 | orchestrator | 2025-11-01 13:50:06.820393 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-11-01 13:50:06.820417 | orchestrator | Saturday 01 November 2025 13:49:56 +0000 (0:00:01.171) 0:00:15.952 ***** 2025-11-01 13:50:06.820429 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:50:06.820439 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:50:06.820450 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:50:06.820461 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:50:06.820471 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:50:06.820482 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:50:06.820492 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:50:06.820503 | orchestrator | 2025-11-01 13:50:06.820514 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-11-01 13:50:06.820525 | orchestrator | Saturday 01 November 2025 13:49:57 +0000 (0:00:00.691) 0:00:16.643 ***** 2025-11-01 13:50:06.820535 | orchestrator | ok: [testbed-manager] 2025-11-01 13:50:06.820546 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:50:06.820557 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:50:06.820567 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:50:06.820577 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:50:06.820588 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:50:06.820598 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:50:06.820609 | orchestrator | 2025-11-01 13:50:06.820619 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-11-01 13:50:06.820630 | orchestrator | Saturday 01 November 2025 13:49:59 +0000 (0:00:02.309) 0:00:18.953 ***** 2025-11-01 13:50:06.820641 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:50:06.820651 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:50:06.820662 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:50:06.820673 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:50:06.820683 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:50:06.820693 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:50:06.820704 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-11-01 13:50:06.820716 | orchestrator | 2025-11-01 13:50:06.820727 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-11-01 13:50:06.820737 | orchestrator | Saturday 01 November 2025 13:50:00 +0000 (0:00:00.979) 0:00:19.932 ***** 2025-11-01 13:50:06.820748 | orchestrator | ok: [testbed-manager] 2025-11-01 13:50:06.820758 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:50:06.820769 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:50:06.820779 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:50:06.820789 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:50:06.820800 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:50:06.820810 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:50:06.820821 | orchestrator | 2025-11-01 13:50:06.820831 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-11-01 13:50:06.820842 | orchestrator | Saturday 01 
November 2025 13:50:02 +0000 (0:00:01.806) 0:00:21.739 ***** 2025-11-01 13:50:06.820853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:50:06.820865 | orchestrator | 2025-11-01 13:50:06.820876 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-11-01 13:50:06.820886 | orchestrator | Saturday 01 November 2025 13:50:03 +0000 (0:00:01.365) 0:00:23.105 ***** 2025-11-01 13:50:06.820897 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:50:06.820908 | orchestrator | ok: [testbed-manager] 2025-11-01 13:50:06.820918 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:50:06.820935 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:50:06.820946 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:50:06.820956 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:50:06.820967 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:50:06.820977 | orchestrator | 2025-11-01 13:50:06.820988 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-11-01 13:50:06.820999 | orchestrator | Saturday 01 November 2025 13:50:04 +0000 (0:00:00.998) 0:00:24.104 ***** 2025-11-01 13:50:06.821009 | orchestrator | ok: [testbed-manager] 2025-11-01 13:50:06.821020 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:50:06.821030 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:50:06.821041 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:50:06.821051 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:50:06.821062 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:50:06.821072 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:50:06.821082 | orchestrator | 2025-11-01 13:50:06.821093 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-11-01 13:50:06.821104 | orchestrator | Saturday 01 November 2025 13:50:05 +0000 (0:00:00.929) 0:00:25.034 ***** 2025-11-01 13:50:06.821114 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-11-01 13:50:06.821125 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-11-01 13:50:06.821136 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-11-01 13:50:06.821146 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-11-01 13:50:06.821157 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-01 13:50:06.821167 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-11-01 13:50:06.821178 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-01 13:50:06.821188 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-11-01 13:50:06.821199 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-01 13:50:06.821209 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-01 13:50:06.821226 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-01 13:50:06.821238 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-11-01 13:50:06.821248 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-01 
13:50:06.821259 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-01 13:50:06.821270 | orchestrator | 2025-11-01 13:50:06.821286 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-11-01 13:50:25.083633 | orchestrator | Saturday 01 November 2025 13:50:06 +0000 (0:00:01.378) 0:00:26.412 ***** 2025-11-01 13:50:25.083743 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:50:25.083775 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:50:25.083786 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:50:25.083796 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:50:25.083806 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:50:25.083816 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:50:25.083825 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:50:25.083835 | orchestrator | 2025-11-01 13:50:25.083847 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-11-01 13:50:25.083858 | orchestrator | Saturday 01 November 2025 13:50:07 +0000 (0:00:00.661) 0:00:27.074 ***** 2025-11-01 13:50:25.083870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-3, testbed-node-4, testbed-node-2, testbed-node-5 2025-11-01 13:50:25.083882 | orchestrator | 2025-11-01 13:50:25.083892 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-11-01 13:50:25.083901 | orchestrator | Saturday 01 November 2025 13:50:12 +0000 (0:00:04.979) 0:00:32.054 ***** 2025-11-01 13:50:25.083935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-11-01 13:50:25.083946 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-11-01 13:50:25.083957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-11-01 13:50:25.083967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-11-01 13:50:25.083977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-11-01 13:50:25.083987 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-11-01 
13:50:25.083997 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-11-01 13:50:25.084006 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-11-01 13:50:25.084016 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-11-01 13:50:25.084026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-11-01 13:50:25.084043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-11-01 13:50:25.084068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-11-01 13:50:25.084084 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-11-01 13:50:25.084094 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-11-01 13:50:25.084110 | orchestrator | 2025-11-01 13:50:25.084120 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-11-01 13:50:25.084130 | orchestrator | Saturday 01 November 2025 13:50:18 +0000 (0:00:06.284) 0:00:38.338 ***** 2025-11-01 13:50:25.084140 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-11-01 13:50:25.084150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-11-01 13:50:25.084160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': 
'192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-11-01 13:50:25.084172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-11-01 13:50:25.084183 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-11-01 13:50:25.084195 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-11-01 13:50:25.084207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-11-01 13:50:25.084218 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-11-01 13:50:25.084229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-11-01 13:50:25.084240 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-11-01 13:50:25.084251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-11-01 13:50:25.084262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-11-01 13:50:25.084284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-11-01 13:50:32.510315 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-11-01 13:50:32.510454 | orchestrator | 2025-11-01 13:50:32.510472 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 
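At this point the osism.commons.network role has written the 01-osism.yaml netplan file (the stock 50-cloud-init.yaml was removed by the cleanup task above) and the vxlan0/vxlan1 netdev and network units (the 30-vxlan*.netdev/.network files listed in the cleanup task that follows). A minimal way to spot-check the result by hand on any node; these commands are illustrative and are not executed by the job:

  ls -l /etc/netplan/                # only 01-osism.yaml should remain after the cleanup
  sudo netplan get                   # dump the merged netplan configuration
  networkctl list vxlan0 vxlan1      # both links should be managed by systemd-networkd
  ip -d link show vxlan0             # expect "vxlan id 42 local 192.168.16.<node> ... mtu 1350"
  bridge fdb show dev vxlan0         # assumption: one all-zero MAC entry per remote VTEP, if the
                                     # generated config floods to the listed unicast dests
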
2025-11-01 13:50:32.510484 | orchestrator | Saturday 01 November 2025 13:50:25 +0000 (0:00:06.332) 0:00:44.671 ***** 2025-11-01 13:50:32.510497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:50:32.510509 | orchestrator | 2025-11-01 13:50:32.510520 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-11-01 13:50:32.510531 | orchestrator | Saturday 01 November 2025 13:50:26 +0000 (0:00:01.355) 0:00:46.026 ***** 2025-11-01 13:50:32.510542 | orchestrator | ok: [testbed-manager] 2025-11-01 13:50:32.510553 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:50:32.510564 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:50:32.510574 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:50:32.510585 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:50:32.510595 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:50:32.510606 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:50:32.510616 | orchestrator | 2025-11-01 13:50:32.510627 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-11-01 13:50:32.510638 | orchestrator | Saturday 01 November 2025 13:50:28 +0000 (0:00:02.037) 0:00:48.063 ***** 2025-11-01 13:50:32.510649 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-01 13:50:32.510661 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-01 13:50:32.510671 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-01 13:50:32.510682 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-01 13:50:32.510692 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-01 13:50:32.510703 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-01 13:50:32.510714 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-01 13:50:32.510724 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-01 13:50:32.510735 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:50:32.510746 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-01 13:50:32.510757 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-01 13:50:32.510767 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-01 13:50:32.510778 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-01 13:50:32.510788 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:50:32.510799 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-01 13:50:32.510810 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-01 13:50:32.510820 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-01 13:50:32.510831 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-01 13:50:32.510842 | orchestrator | skipping: 
[testbed-node-1] 2025-11-01 13:50:32.510855 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-01 13:50:32.510888 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-01 13:50:32.510901 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-01 13:50:32.510913 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-01 13:50:32.510924 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:50:32.510937 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-01 13:50:32.510949 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-01 13:50:32.510961 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-01 13:50:32.510973 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-01 13:50:32.510985 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:50:32.510997 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:50:32.511010 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-01 13:50:32.511022 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-01 13:50:32.511033 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-01 13:50:32.511045 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-01 13:50:32.511057 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:50:32.511069 | orchestrator | 2025-11-01 13:50:32.511082 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-11-01 13:50:32.511116 | orchestrator | Saturday 01 November 2025 13:50:30 +0000 (0:00:02.149) 0:00:50.213 ***** 2025-11-01 13:50:32.511130 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:50:32.511142 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:50:32.511155 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:50:32.511167 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:50:32.511179 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:50:32.511191 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:50:32.511203 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:50:32.511213 | orchestrator | 2025-11-01 13:50:32.511224 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-11-01 13:50:32.511235 | orchestrator | Saturday 01 November 2025 13:50:31 +0000 (0:00:00.687) 0:00:50.900 ***** 2025-11-01 13:50:32.511245 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:50:32.511256 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:50:32.511266 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:50:32.511276 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:50:32.511287 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:50:32.511297 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:50:32.511308 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:50:32.511318 | orchestrator | 2025-11-01 13:50:32.511329 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:50:32.511341 | orchestrator | testbed-manager : ok=21  
changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-01 13:50:32.511373 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:50:32.511385 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:50:32.511396 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:50:32.511406 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:50:32.511425 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:50:32.511436 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:50:32.511446 | orchestrator | 2025-11-01 13:50:32.511457 | orchestrator | 2025-11-01 13:50:32.511468 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:50:32.511479 | orchestrator | Saturday 01 November 2025 13:50:32 +0000 (0:00:00.759) 0:00:51.660 ***** 2025-11-01 13:50:32.511489 | orchestrator | =============================================================================== 2025-11-01 13:50:32.511500 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.33s 2025-11-01 13:50:32.511511 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.28s 2025-11-01 13:50:32.511521 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.98s 2025-11-01 13:50:32.511532 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.78s 2025-11-01 13:50:32.511542 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.31s 2025-11-01 13:50:32.511553 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.15s 2025-11-01 13:50:32.511564 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.11s 2025-11-01 13:50:32.511574 | orchestrator | osism.commons.network : List existing configuration files --------------- 2.04s 2025-11-01 13:50:32.511585 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.88s 2025-11-01 13:50:32.511595 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.81s 2025-11-01 13:50:32.511605 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.81s 2025-11-01 13:50:32.511616 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.72s 2025-11-01 13:50:32.511627 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.38s 2025-11-01 13:50:32.511637 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.37s 2025-11-01 13:50:32.511648 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.36s 2025-11-01 13:50:32.511658 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.31s 2025-11-01 13:50:32.511669 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.17s 2025-11-01 13:50:32.511680 | orchestrator | osism.commons.network : Create required directories --------------------- 1.02s 
2025-11-01 13:50:32.511690 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.00s 2025-11-01 13:50:32.511701 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.98s 2025-11-01 13:50:32.864129 | orchestrator | + osism apply wireguard 2025-11-01 13:50:45.106674 | orchestrator | 2025-11-01 13:50:45 | INFO  | Task 81a4cbb6-3eef-45ca-95c2-cc1d20aa1323 (wireguard) was prepared for execution. 2025-11-01 13:50:45.106763 | orchestrator | 2025-11-01 13:50:45 | INFO  | It takes a moment until task 81a4cbb6-3eef-45ca-95c2-cc1d20aa1323 (wireguard) has been started and output is visible here. 2025-11-01 13:51:06.651927 | orchestrator | 2025-11-01 13:51:06.652019 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-11-01 13:51:06.652033 | orchestrator | 2025-11-01 13:51:06.652045 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-11-01 13:51:06.652056 | orchestrator | Saturday 01 November 2025 13:50:49 +0000 (0:00:00.232) 0:00:00.232 ***** 2025-11-01 13:51:06.652067 | orchestrator | ok: [testbed-manager] 2025-11-01 13:51:06.652079 | orchestrator | 2025-11-01 13:51:06.652089 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-11-01 13:51:06.652100 | orchestrator | Saturday 01 November 2025 13:50:51 +0000 (0:00:01.704) 0:00:01.937 ***** 2025-11-01 13:51:06.652136 | orchestrator | changed: [testbed-manager] 2025-11-01 13:51:06.652148 | orchestrator | 2025-11-01 13:51:06.652159 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-11-01 13:51:06.652170 | orchestrator | Saturday 01 November 2025 13:50:58 +0000 (0:00:07.325) 0:00:09.262 ***** 2025-11-01 13:51:06.652181 | orchestrator | changed: [testbed-manager] 2025-11-01 13:51:06.652191 | orchestrator | 2025-11-01 13:51:06.652202 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-11-01 13:51:06.652212 | orchestrator | Saturday 01 November 2025 13:50:59 +0000 (0:00:00.589) 0:00:09.851 ***** 2025-11-01 13:51:06.652223 | orchestrator | changed: [testbed-manager] 2025-11-01 13:51:06.652233 | orchestrator | 2025-11-01 13:51:06.652244 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-11-01 13:51:06.652254 | orchestrator | Saturday 01 November 2025 13:50:59 +0000 (0:00:00.452) 0:00:10.304 ***** 2025-11-01 13:51:06.652265 | orchestrator | ok: [testbed-manager] 2025-11-01 13:51:06.652276 | orchestrator | 2025-11-01 13:51:06.652287 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-11-01 13:51:06.652297 | orchestrator | Saturday 01 November 2025 13:51:00 +0000 (0:00:00.709) 0:00:11.014 ***** 2025-11-01 13:51:06.652308 | orchestrator | ok: [testbed-manager] 2025-11-01 13:51:06.652318 | orchestrator | 2025-11-01 13:51:06.652329 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-11-01 13:51:06.652339 | orchestrator | Saturday 01 November 2025 13:51:00 +0000 (0:00:00.431) 0:00:11.445 ***** 2025-11-01 13:51:06.652349 | orchestrator | ok: [testbed-manager] 2025-11-01 13:51:06.652389 | orchestrator | 2025-11-01 13:51:06.652400 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-11-01 13:51:06.652410 | orchestrator | Saturday 01 November 
2025 13:51:01 +0000 (0:00:00.433) 0:00:11.879 ***** 2025-11-01 13:51:06.652421 | orchestrator | changed: [testbed-manager] 2025-11-01 13:51:06.652431 | orchestrator | 2025-11-01 13:51:06.652442 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-11-01 13:51:06.652452 | orchestrator | Saturday 01 November 2025 13:51:02 +0000 (0:00:01.252) 0:00:13.131 ***** 2025-11-01 13:51:06.652463 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-01 13:51:06.652474 | orchestrator | changed: [testbed-manager] 2025-11-01 13:51:06.652484 | orchestrator | 2025-11-01 13:51:06.652495 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-11-01 13:51:06.652505 | orchestrator | Saturday 01 November 2025 13:51:03 +0000 (0:00:00.965) 0:00:14.096 ***** 2025-11-01 13:51:06.652516 | orchestrator | changed: [testbed-manager] 2025-11-01 13:51:06.652526 | orchestrator | 2025-11-01 13:51:06.652537 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-11-01 13:51:06.652552 | orchestrator | Saturday 01 November 2025 13:51:05 +0000 (0:00:01.827) 0:00:15.924 ***** 2025-11-01 13:51:06.652570 | orchestrator | changed: [testbed-manager] 2025-11-01 13:51:06.652589 | orchestrator | 2025-11-01 13:51:06.652606 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:51:06.652625 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:51:06.652638 | orchestrator | 2025-11-01 13:51:06.652649 | orchestrator | 2025-11-01 13:51:06.652660 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:51:06.652671 | orchestrator | Saturday 01 November 2025 13:51:06 +0000 (0:00:01.066) 0:00:16.990 ***** 2025-11-01 13:51:06.652681 | orchestrator | =============================================================================== 2025-11-01 13:51:06.652692 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.33s 2025-11-01 13:51:06.652703 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.83s 2025-11-01 13:51:06.652713 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.70s 2025-11-01 13:51:06.652724 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.25s 2025-11-01 13:51:06.652746 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.07s 2025-11-01 13:51:06.652757 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s 2025-11-01 13:51:06.652767 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.71s 2025-11-01 13:51:06.652778 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.59s 2025-11-01 13:51:06.652789 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2025-11-01 13:51:06.652799 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2025-11-01 13:51:06.652810 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s 2025-11-01 13:51:06.989280 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-11-01 13:51:07.030586 | 
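The wireguard role above installed the packages, generated the server and preshared keys, wrote the wg0.conf and client configuration files, and started wg-quick@wg0 on testbed-manager. An illustrative way to inspect the result (not run by the job; the /etc/wireguard/wg0.conf path is the wg-quick default and an assumption here):

  sudo wg show wg0                          # interface up, server public key, configured peer(s)
  systemctl status wg-quick@wg0.service     # the unit managed and restarted by the role
  sudo wg showconf wg0                      # effective runtime configuration derived from wg0.conf
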
orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-11-01 13:51:07.030620 | orchestrator | Dload Upload Total Spent Left Speed 2025-11-01 13:51:07.107145 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 195 0 --:--:-- --:--:-- --:--:-- 197 2025-11-01 13:51:07.116742 | orchestrator | + osism apply --environment custom workarounds 2025-11-01 13:51:09.092891 | orchestrator | 2025-11-01 13:51:09 | INFO  | Trying to run play workarounds in environment custom 2025-11-01 13:51:19.256474 | orchestrator | 2025-11-01 13:51:19 | INFO  | Task 1ad5eb56-21ee-4dcb-b40e-1cfb5f01292b (workarounds) was prepared for execution. 2025-11-01 13:51:19.256578 | orchestrator | 2025-11-01 13:51:19 | INFO  | It takes a moment until task 1ad5eb56-21ee-4dcb-b40e-1cfb5f01292b (workarounds) has been started and output is visible here. 2025-11-01 13:51:45.610414 | orchestrator | 2025-11-01 13:51:45.610490 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 13:51:45.610503 | orchestrator | 2025-11-01 13:51:45.610513 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-11-01 13:51:45.610523 | orchestrator | Saturday 01 November 2025 13:51:23 +0000 (0:00:00.134) 0:00:00.134 ***** 2025-11-01 13:51:45.610534 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-11-01 13:51:45.610543 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-11-01 13:51:45.610553 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-11-01 13:51:45.610562 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-11-01 13:51:45.610571 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-11-01 13:51:45.610581 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-11-01 13:51:45.610590 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-11-01 13:51:45.610599 | orchestrator | 2025-11-01 13:51:45.610608 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-11-01 13:51:45.610618 | orchestrator | 2025-11-01 13:51:45.610627 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-11-01 13:51:45.610636 | orchestrator | Saturday 01 November 2025 13:51:24 +0000 (0:00:00.830) 0:00:00.964 ***** 2025-11-01 13:51:45.610646 | orchestrator | ok: [testbed-manager] 2025-11-01 13:51:45.610656 | orchestrator | 2025-11-01 13:51:45.610665 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-11-01 13:51:45.610675 | orchestrator | 2025-11-01 13:51:45.610684 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-11-01 13:51:45.610694 | orchestrator | Saturday 01 November 2025 13:51:27 +0000 (0:00:02.661) 0:00:03.626 ***** 2025-11-01 13:51:45.610703 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:51:45.610712 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:51:45.610722 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:51:45.610731 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:51:45.610760 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:51:45.610770 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:51:45.610779 | orchestrator | 2025-11-01 13:51:45.610788 | orchestrator | PLAY [Add custom CA 
certificates to non-manager nodes] ************************* 2025-11-01 13:51:45.610797 | orchestrator | 2025-11-01 13:51:45.610807 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-11-01 13:51:45.610817 | orchestrator | Saturday 01 November 2025 13:51:28 +0000 (0:00:01.848) 0:00:05.475 ***** 2025-11-01 13:51:45.610826 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-01 13:51:45.610837 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-01 13:51:45.610846 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-01 13:51:45.610855 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-01 13:51:45.610865 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-01 13:51:45.610874 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-01 13:51:45.610883 | orchestrator | 2025-11-01 13:51:45.610892 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-11-01 13:51:45.610902 | orchestrator | Saturday 01 November 2025 13:51:30 +0000 (0:00:01.593) 0:00:07.069 ***** 2025-11-01 13:51:45.610911 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:51:45.610920 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:51:45.610929 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:51:45.610938 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:51:45.610948 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:51:45.610959 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:51:45.610970 | orchestrator | 2025-11-01 13:51:45.610980 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-11-01 13:51:45.610991 | orchestrator | Saturday 01 November 2025 13:51:34 +0000 (0:00:03.657) 0:00:10.726 ***** 2025-11-01 13:51:45.611001 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:51:45.611011 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:51:45.611021 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:51:45.611032 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:51:45.611042 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:51:45.611053 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:51:45.611063 | orchestrator | 2025-11-01 13:51:45.611074 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-11-01 13:51:45.611085 | orchestrator | 2025-11-01 13:51:45.611096 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-11-01 13:51:45.611107 | orchestrator | Saturday 01 November 2025 13:51:34 +0000 (0:00:00.729) 0:00:11.456 ***** 2025-11-01 13:51:45.611117 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:51:45.611127 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:51:45.611138 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:51:45.611148 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:51:45.611159 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:51:45.611169 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:51:45.611180 | 
orchestrator | changed: [testbed-manager] 2025-11-01 13:51:45.611190 | orchestrator | 2025-11-01 13:51:45.611201 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-11-01 13:51:45.611212 | orchestrator | Saturday 01 November 2025 13:51:36 +0000 (0:00:01.592) 0:00:13.048 ***** 2025-11-01 13:51:45.611223 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:51:45.611233 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:51:45.611243 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:51:45.611254 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:51:45.611265 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:51:45.611282 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:51:45.611305 | orchestrator | changed: [testbed-manager] 2025-11-01 13:51:45.611316 | orchestrator | 2025-11-01 13:51:45.611326 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-11-01 13:51:45.611335 | orchestrator | Saturday 01 November 2025 13:51:38 +0000 (0:00:01.770) 0:00:14.819 ***** 2025-11-01 13:51:45.611349 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:51:45.611379 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:51:45.611390 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:51:45.611399 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:51:45.611409 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:51:45.611418 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:51:45.611428 | orchestrator | ok: [testbed-manager] 2025-11-01 13:51:45.611437 | orchestrator | 2025-11-01 13:51:45.611446 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-11-01 13:51:45.611456 | orchestrator | Saturday 01 November 2025 13:51:39 +0000 (0:00:01.662) 0:00:16.482 ***** 2025-11-01 13:51:45.611465 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:51:45.611475 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:51:45.611484 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:51:45.611493 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:51:45.611503 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:51:45.611512 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:51:45.611521 | orchestrator | changed: [testbed-manager] 2025-11-01 13:51:45.611530 | orchestrator | 2025-11-01 13:51:45.611540 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-11-01 13:51:45.611550 | orchestrator | Saturday 01 November 2025 13:51:41 +0000 (0:00:01.936) 0:00:18.418 ***** 2025-11-01 13:51:45.611559 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:51:45.611583 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:51:45.611593 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:51:45.611602 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:51:45.611612 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:51:45.611621 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:51:45.611630 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:51:45.611640 | orchestrator | 2025-11-01 13:51:45.611649 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-11-01 13:51:45.611659 | orchestrator | 2025-11-01 13:51:45.611669 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-11-01 13:51:45.611678 | orchestrator | Saturday 01 November 2025 13:51:42 +0000 
(0:00:00.773) 0:00:19.191 ***** 2025-11-01 13:51:45.611688 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:51:45.611697 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:51:45.611707 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:51:45.611716 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:51:45.611725 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:51:45.611735 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:51:45.611744 | orchestrator | ok: [testbed-manager] 2025-11-01 13:51:45.611754 | orchestrator | 2025-11-01 13:51:45.611764 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:51:45.611774 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:51:45.611785 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:51:45.611795 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:51:45.611805 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:51:45.611814 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:51:45.611831 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:51:45.611841 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:51:45.611851 | orchestrator | 2025-11-01 13:51:45.611860 | orchestrator | 2025-11-01 13:51:45.611870 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:51:45.611880 | orchestrator | Saturday 01 November 2025 13:51:45 +0000 (0:00:02.963) 0:00:22.155 ***** 2025-11-01 13:51:45.611889 | orchestrator | =============================================================================== 2025-11-01 13:51:45.611899 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.66s 2025-11-01 13:51:45.611908 | orchestrator | Install python3-docker -------------------------------------------------- 2.96s 2025-11-01 13:51:45.611918 | orchestrator | Apply netplan configuration --------------------------------------------- 2.66s 2025-11-01 13:51:45.611928 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.94s 2025-11-01 13:51:45.611937 | orchestrator | Apply netplan configuration --------------------------------------------- 1.85s 2025-11-01 13:51:45.611951 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.77s 2025-11-01 13:51:45.611961 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.66s 2025-11-01 13:51:45.611971 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.59s 2025-11-01 13:51:45.611980 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.59s 2025-11-01 13:51:45.611990 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.83s 2025-11-01 13:51:45.611999 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.77s 2025-11-01 13:51:45.612014 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.73s 2025-11-01 13:51:46.325349 | 
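With the workarounds play finished, some of its effects on the nodes can be spot-checked as follows (illustrative only; the unit and package names come from the tasks above, while the testbed.pem link name is an assumption about how update-ca-certificates names the copied CA):

  ls -l /etc/ssl/certs/ | grep -i testbed                  # custom CA picked up by update-ca-certificates
  systemctl is-enabled workarounds.service                 # enabled by the Debian branch of the play
  python3 -c 'import docker; print(docker.__version__)'    # python3-docker from Debian Sid is importable
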
orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-11-01 13:51:58.456191 | orchestrator | 2025-11-01 13:51:58 | INFO  | Task 958d0e9f-e371-4d46-9358-1a5f414012f8 (reboot) was prepared for execution. 2025-11-01 13:51:58.456292 | orchestrator | 2025-11-01 13:51:58 | INFO  | It takes a moment until task 958d0e9f-e371-4d46-9358-1a5f414012f8 (reboot) has been started and output is visible here. 2025-11-01 13:52:09.218176 | orchestrator | 2025-11-01 13:52:09.218268 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-01 13:52:09.218289 | orchestrator | 2025-11-01 13:52:09.218309 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-01 13:52:09.218330 | orchestrator | Saturday 01 November 2025 13:52:02 +0000 (0:00:00.214) 0:00:00.214 ***** 2025-11-01 13:52:09.218347 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:52:09.218429 | orchestrator | 2025-11-01 13:52:09.218451 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-01 13:52:09.218471 | orchestrator | Saturday 01 November 2025 13:52:02 +0000 (0:00:00.104) 0:00:00.318 ***** 2025-11-01 13:52:09.218489 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:52:09.218507 | orchestrator | 2025-11-01 13:52:09.218527 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-01 13:52:09.218547 | orchestrator | Saturday 01 November 2025 13:52:03 +0000 (0:00:00.994) 0:00:01.312 ***** 2025-11-01 13:52:09.218565 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:52:09.218585 | orchestrator | 2025-11-01 13:52:09.218604 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-01 13:52:09.218622 | orchestrator | 2025-11-01 13:52:09.218636 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-01 13:52:09.218647 | orchestrator | Saturday 01 November 2025 13:52:03 +0000 (0:00:00.139) 0:00:01.452 ***** 2025-11-01 13:52:09.218658 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:52:09.218702 | orchestrator | 2025-11-01 13:52:09.218723 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-01 13:52:09.218743 | orchestrator | Saturday 01 November 2025 13:52:04 +0000 (0:00:00.124) 0:00:01.576 ***** 2025-11-01 13:52:09.218763 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:52:09.218783 | orchestrator | 2025-11-01 13:52:09.218803 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-01 13:52:09.218817 | orchestrator | Saturday 01 November 2025 13:52:04 +0000 (0:00:00.725) 0:00:02.301 ***** 2025-11-01 13:52:09.218830 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:52:09.218842 | orchestrator | 2025-11-01 13:52:09.218855 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-01 13:52:09.218867 | orchestrator | 2025-11-01 13:52:09.218879 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-01 13:52:09.218892 | orchestrator | Saturday 01 November 2025 13:52:04 +0000 (0:00:00.124) 0:00:02.426 ***** 2025-11-01 13:52:09.218904 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:52:09.218917 | orchestrator | 2025-11-01 13:52:09.218929 | orchestrator | TASK [Reboot system - do not wait for the 
reboot to complete] ****************** 2025-11-01 13:52:09.218941 | orchestrator | Saturday 01 November 2025 13:52:05 +0000 (0:00:00.255) 0:00:02.681 ***** 2025-11-01 13:52:09.218954 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:52:09.218966 | orchestrator | 2025-11-01 13:52:09.218978 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-01 13:52:09.218990 | orchestrator | Saturday 01 November 2025 13:52:05 +0000 (0:00:00.714) 0:00:03.396 ***** 2025-11-01 13:52:09.219002 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:52:09.219014 | orchestrator | 2025-11-01 13:52:09.219026 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-01 13:52:09.219038 | orchestrator | 2025-11-01 13:52:09.219051 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-01 13:52:09.219063 | orchestrator | Saturday 01 November 2025 13:52:06 +0000 (0:00:00.129) 0:00:03.525 ***** 2025-11-01 13:52:09.219076 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:52:09.219088 | orchestrator | 2025-11-01 13:52:09.219099 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-01 13:52:09.219110 | orchestrator | Saturday 01 November 2025 13:52:06 +0000 (0:00:00.135) 0:00:03.660 ***** 2025-11-01 13:52:09.219121 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:52:09.219131 | orchestrator | 2025-11-01 13:52:09.219142 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-01 13:52:09.219152 | orchestrator | Saturday 01 November 2025 13:52:06 +0000 (0:00:00.684) 0:00:04.345 ***** 2025-11-01 13:52:09.219163 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:52:09.219173 | orchestrator | 2025-11-01 13:52:09.219184 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-01 13:52:09.219194 | orchestrator | 2025-11-01 13:52:09.219205 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-01 13:52:09.219216 | orchestrator | Saturday 01 November 2025 13:52:07 +0000 (0:00:00.121) 0:00:04.467 ***** 2025-11-01 13:52:09.219226 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:52:09.219236 | orchestrator | 2025-11-01 13:52:09.219247 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-01 13:52:09.219258 | orchestrator | Saturday 01 November 2025 13:52:07 +0000 (0:00:00.112) 0:00:04.579 ***** 2025-11-01 13:52:09.219282 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:52:09.219293 | orchestrator | 2025-11-01 13:52:09.219304 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-01 13:52:09.219314 | orchestrator | Saturday 01 November 2025 13:52:07 +0000 (0:00:00.716) 0:00:05.296 ***** 2025-11-01 13:52:09.219325 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:52:09.219335 | orchestrator | 2025-11-01 13:52:09.219346 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-01 13:52:09.219391 | orchestrator | 2025-11-01 13:52:09.219404 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-01 13:52:09.219414 | orchestrator | Saturday 01 November 2025 13:52:07 +0000 (0:00:00.120) 0:00:05.416 ***** 2025-11-01 13:52:09.219425 | 
orchestrator | skipping: [testbed-node-5] 2025-11-01 13:52:09.219436 | orchestrator | 2025-11-01 13:52:09.219446 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-01 13:52:09.219457 | orchestrator | Saturday 01 November 2025 13:52:08 +0000 (0:00:00.114) 0:00:05.530 ***** 2025-11-01 13:52:09.219467 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:52:09.219478 | orchestrator | 2025-11-01 13:52:09.219488 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-01 13:52:09.219499 | orchestrator | Saturday 01 November 2025 13:52:08 +0000 (0:00:00.705) 0:00:06.236 ***** 2025-11-01 13:52:09.219529 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:52:09.219540 | orchestrator | 2025-11-01 13:52:09.219551 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:52:09.219564 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:52:09.219585 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:52:09.219605 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:52:09.219624 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:52:09.219645 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:52:09.219658 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:52:09.219669 | orchestrator | 2025-11-01 13:52:09.219680 | orchestrator | 2025-11-01 13:52:09.219690 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:52:09.219701 | orchestrator | Saturday 01 November 2025 13:52:08 +0000 (0:00:00.040) 0:00:06.277 ***** 2025-11-01 13:52:09.219712 | orchestrator | =============================================================================== 2025-11-01 13:52:09.219723 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.54s 2025-11-01 13:52:09.219733 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.85s 2025-11-01 13:52:09.219744 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s 2025-11-01 13:52:09.581786 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-11-01 13:52:21.877904 | orchestrator | 2025-11-01 13:52:21 | INFO  | Task 5ad513ad-c2d8-4940-83bd-084ea636737b (wait-for-connection) was prepared for execution. 2025-11-01 13:52:21.877998 | orchestrator | 2025-11-01 13:52:21 | INFO  | It takes a moment until task 5ad513ad-c2d8-4940-83bd-084ea636737b (wait-for-connection) has been started and output is visible here. 
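The nodes were rebooted above without waiting for them to come back, so the play that follows simply blocks until SSH is reachable again. For reference, the same check as an ad-hoc call (a sketch only; it assumes the testbed-nodes group from the log is available to a plain ansible invocation):

  ansible testbed-nodes -m wait_for_connection -a 'delay=10 timeout=600'
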
2025-11-01 13:52:39.453176 | orchestrator | 2025-11-01 13:52:39.453255 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-11-01 13:52:39.453268 | orchestrator | 2025-11-01 13:52:39.453278 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-11-01 13:52:39.453288 | orchestrator | Saturday 01 November 2025 13:52:27 +0000 (0:00:00.272) 0:00:00.272 ***** 2025-11-01 13:52:39.453298 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:52:39.453308 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:52:39.453317 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:52:39.453327 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:52:39.453336 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:52:39.453413 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:52:39.453425 | orchestrator | 2025-11-01 13:52:39.453436 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:52:39.453446 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:52:39.453457 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:52:39.453467 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:52:39.453477 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:52:39.453486 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:52:39.453506 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:52:39.453516 | orchestrator | 2025-11-01 13:52:39.453526 | orchestrator | 2025-11-01 13:52:39.453535 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:52:39.453545 | orchestrator | Saturday 01 November 2025 13:52:39 +0000 (0:00:11.712) 0:00:11.985 ***** 2025-11-01 13:52:39.453554 | orchestrator | =============================================================================== 2025-11-01 13:52:39.453564 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.71s 2025-11-01 13:52:39.783016 | orchestrator | + osism apply hddtemp 2025-11-01 13:52:51.875490 | orchestrator | 2025-11-01 13:52:51 | INFO  | Task 5da7f1de-0298-448a-9319-7305456536c8 (hddtemp) was prepared for execution. 2025-11-01 13:52:51.875590 | orchestrator | 2025-11-01 13:52:51 | INFO  | It takes a moment until task 5da7f1de-0298-448a-9319-7305456536c8 (hddtemp) has been started and output is visible here. 
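The hddtemp play that follows replaces the deprecated hddtemp userspace daemon with the in-kernel drivetemp hwmon driver plus lm-sensors, as the task names below show. A rough manual equivalent on a Debian-family host, sketched purely as an illustration (the module-load path and the lm-sensors service unit name are assumptions; the role's exact behaviour may differ):

    # Drop the old userspace daemon in favour of the kernel drivetemp driver.
    sudo apt-get remove -y hddtemp
    # Make the module load on every boot and load it now.
    echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf
    sudo modprobe drivetemp
    # Install lm-sensors and manage its service so temperatures show up in sensors(1).
    sudo apt-get install -y lm-sensors
    sudo systemctl enable --now lm-sensors.service
    # Read the disk and chip temperatures.
    sensors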
2025-11-01 13:53:22.217890 | orchestrator | 2025-11-01 13:53:22.217978 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-11-01 13:53:22.217993 | orchestrator | 2025-11-01 13:53:22.218005 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-11-01 13:53:22.218064 | orchestrator | Saturday 01 November 2025 13:52:56 +0000 (0:00:00.292) 0:00:00.292 ***** 2025-11-01 13:53:22.218077 | orchestrator | ok: [testbed-manager] 2025-11-01 13:53:22.218089 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:53:22.218100 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:53:22.218110 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:53:22.218121 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:53:22.218131 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:53:22.218142 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:53:22.218153 | orchestrator | 2025-11-01 13:53:22.218164 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-11-01 13:53:22.218175 | orchestrator | Saturday 01 November 2025 13:52:57 +0000 (0:00:00.752) 0:00:01.044 ***** 2025-11-01 13:53:22.218187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:53:22.218200 | orchestrator | 2025-11-01 13:53:22.218211 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-11-01 13:53:22.218222 | orchestrator | Saturday 01 November 2025 13:52:58 +0000 (0:00:01.286) 0:00:02.330 ***** 2025-11-01 13:53:22.218233 | orchestrator | ok: [testbed-manager] 2025-11-01 13:53:22.218244 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:53:22.218254 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:53:22.218265 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:53:22.218275 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:53:22.218306 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:53:22.218317 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:53:22.218328 | orchestrator | 2025-11-01 13:53:22.218338 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-11-01 13:53:22.218349 | orchestrator | Saturday 01 November 2025 13:53:00 +0000 (0:00:02.263) 0:00:04.594 ***** 2025-11-01 13:53:22.218385 | orchestrator | changed: [testbed-manager] 2025-11-01 13:53:22.218398 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:53:22.218408 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:53:22.218419 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:53:22.218430 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:53:22.218440 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:53:22.218451 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:53:22.218461 | orchestrator | 2025-11-01 13:53:22.218472 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-11-01 13:53:22.218483 | orchestrator | Saturday 01 November 2025 13:53:01 +0000 (0:00:01.366) 0:00:05.960 ***** 2025-11-01 13:53:22.218494 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:53:22.218504 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:53:22.218515 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:53:22.218525 | orchestrator | ok: [testbed-node-3] 2025-11-01 
13:53:22.218536 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:53:22.218546 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:53:22.218557 | orchestrator | ok: [testbed-manager] 2025-11-01 13:53:22.218568 | orchestrator | 2025-11-01 13:53:22.218579 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-11-01 13:53:22.218589 | orchestrator | Saturday 01 November 2025 13:53:03 +0000 (0:00:01.249) 0:00:07.209 ***** 2025-11-01 13:53:22.218600 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:53:22.218611 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:53:22.218621 | orchestrator | changed: [testbed-manager] 2025-11-01 13:53:22.218632 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:53:22.218643 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:53:22.218654 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:53:22.218664 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:53:22.218675 | orchestrator | 2025-11-01 13:53:22.218686 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-11-01 13:53:22.218697 | orchestrator | Saturday 01 November 2025 13:53:04 +0000 (0:00:00.885) 0:00:08.095 ***** 2025-11-01 13:53:22.218707 | orchestrator | changed: [testbed-manager] 2025-11-01 13:53:22.218718 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:53:22.218729 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:53:22.218739 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:53:22.218750 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:53:22.218760 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:53:22.218771 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:53:22.218781 | orchestrator | 2025-11-01 13:53:22.218792 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-11-01 13:53:22.218803 | orchestrator | Saturday 01 November 2025 13:53:18 +0000 (0:00:14.360) 0:00:22.456 ***** 2025-11-01 13:53:22.218827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:53:22.218838 | orchestrator | 2025-11-01 13:53:22.218849 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-11-01 13:53:22.218860 | orchestrator | Saturday 01 November 2025 13:53:19 +0000 (0:00:01.267) 0:00:23.723 ***** 2025-11-01 13:53:22.218871 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:53:22.218881 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:53:22.218892 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:53:22.218902 | orchestrator | changed: [testbed-manager] 2025-11-01 13:53:22.218913 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:53:22.218923 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:53:22.218941 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:53:22.218952 | orchestrator | 2025-11-01 13:53:22.218963 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:53:22.218974 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:53:22.219001 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:53:22.219013 | 
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:53:22.219024 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:53:22.219036 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:53:22.219046 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:53:22.219057 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:53:22.219068 | orchestrator | 2025-11-01 13:53:22.219079 | orchestrator | 2025-11-01 13:53:22.219090 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:53:22.219101 | orchestrator | Saturday 01 November 2025 13:53:21 +0000 (0:00:02.040) 0:00:25.764 ***** 2025-11-01 13:53:22.219111 | orchestrator | =============================================================================== 2025-11-01 13:53:22.219122 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.36s 2025-11-01 13:53:22.219133 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.26s 2025-11-01 13:53:22.219143 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.04s 2025-11-01 13:53:22.219154 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.37s 2025-11-01 13:53:22.219165 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.29s 2025-11-01 13:53:22.219175 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.27s 2025-11-01 13:53:22.219186 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.25s 2025-11-01 13:53:22.219197 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.89s 2025-11-01 13:53:22.219208 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.75s 2025-11-01 13:53:22.562132 | orchestrator | ++ semver latest 7.1.1 2025-11-01 13:53:22.621440 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-01 13:53:22.621473 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-01 13:53:22.621486 | orchestrator | + sudo systemctl restart manager.service 2025-11-01 13:53:39.726674 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-11-01 13:53:39.726745 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-11-01 13:53:39.726759 | orchestrator | + local max_attempts=60 2025-11-01 13:53:39.726772 | orchestrator | + local name=ceph-ansible 2025-11-01 13:53:39.726783 | orchestrator | + local attempt_num=1 2025-11-01 13:53:39.726794 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:53:39.754093 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-01 13:53:39.754123 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 13:53:39.754135 | orchestrator | + sleep 5 2025-11-01 13:53:44.758098 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:53:44.792537 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-01 13:53:44.792703 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 13:53:44.792723 | orchestrator | + sleep 5 2025-11-01 
13:53:49.795972 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:53:49.815888 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-01 13:53:49.815908 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 13:53:49.815914 | orchestrator | + sleep 5 2025-11-01 13:53:54.820636 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:53:54.862698 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-01 13:53:54.862755 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 13:53:54.862764 | orchestrator | + sleep 5 2025-11-01 13:53:59.869971 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:53:59.912299 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-01 13:53:59.912343 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 13:53:59.912355 | orchestrator | + sleep 5 2025-11-01 13:54:04.918191 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:54:04.959788 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-01 13:54:04.959838 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 13:54:04.959852 | orchestrator | + sleep 5 2025-11-01 13:54:09.964932 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:54:10.006965 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-01 13:54:10.007009 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 13:54:10.007022 | orchestrator | + sleep 5 2025-11-01 13:54:15.010598 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:54:15.050162 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-01 13:54:15.050243 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 13:54:15.050258 | orchestrator | + sleep 5 2025-11-01 13:54:20.054482 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:54:20.103308 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-01 13:54:20.103389 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 13:54:20.103405 | orchestrator | + sleep 5 2025-11-01 13:54:25.107560 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:54:25.145744 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-01 13:54:25.145799 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 13:54:25.145812 | orchestrator | + sleep 5 2025-11-01 13:54:30.150617 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:54:30.192625 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-01 13:54:30.192677 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 13:54:30.192690 | orchestrator | + sleep 5 2025-11-01 13:54:35.198333 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:54:35.240207 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-01 13:54:35.240260 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 13:54:35.240273 | orchestrator | + sleep 5 2025-11-01 13:54:40.245691 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:54:40.284796 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-01 13:54:40.284879 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-11-01 13:54:40.284895 | orchestrator | + sleep 5 2025-11-01 13:54:45.289127 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 13:54:45.325656 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-01 13:54:45.325742 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-11-01 13:54:45.325755 | orchestrator | + local max_attempts=60 2025-11-01 13:54:45.325767 | orchestrator | + local name=kolla-ansible 2025-11-01 13:54:45.325777 | orchestrator | + local attempt_num=1 2025-11-01 13:54:45.326055 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-11-01 13:54:45.358510 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-01 13:54:45.358561 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-11-01 13:54:45.358573 | orchestrator | + local max_attempts=60 2025-11-01 13:54:45.358584 | orchestrator | + local name=osism-ansible 2025-11-01 13:54:45.358594 | orchestrator | + local attempt_num=1 2025-11-01 13:54:45.359480 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-11-01 13:54:45.400916 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-01 13:54:45.400962 | orchestrator | + [[ true == \t\r\u\e ]] 2025-11-01 13:54:45.400975 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-11-01 13:54:45.563313 | orchestrator | ARA in ceph-ansible already disabled. 2025-11-01 13:54:45.741592 | orchestrator | ARA in kolla-ansible already disabled. 2025-11-01 13:54:45.920360 | orchestrator | ARA in osism-ansible already disabled. 2025-11-01 13:54:46.106402 | orchestrator | ARA in osism-kubernetes already disabled. 2025-11-01 13:54:46.107822 | orchestrator | + osism apply gather-facts 2025-11-01 13:54:58.320143 | orchestrator | 2025-11-01 13:54:58 | INFO  | Task ee40ffc4-2fc2-4ad2-9ac5-8a6d585c2c13 (gather-facts) was prepared for execution. 2025-11-01 13:54:58.320259 | orchestrator | 2025-11-01 13:54:58 | INFO  | It takes a moment until task ee40ffc4-2fc2-4ad2-9ac5-8a6d585c2c13 (gather-facts) has been started and output is visible here. 
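The long xtrace above comes from a small shell helper that polls `docker inspect` for a container's health status every five seconds until Docker reports "healthy", giving up after a fixed number of attempts. A reconstruction from the visible trace (argument order, the docker inspect format string, the post-increment attempt counter and the five-second sleep are all taken from the trace; the real helper's failure handling may differ):

    # Wait until a named container reports a healthy Docker health check,
    # polling every 5 seconds for at most max_attempts polls.
    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1

        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "${name}")" == "healthy" ]]; do
            if (( attempt_num++ == max_attempts )); then
                echo "container ${name} did not become healthy in time" >&2
                return 1
            fi
            sleep 5
        done
    }

    # As in the log: ceph-ansible needs ~70 seconds after the manager restart,
    # kolla-ansible and osism-ansible are already healthy on the first poll.
    wait_for_container_healthy 60 ceph-ansible
    wait_for_container_healthy 60 kolla-ansible
    wait_for_container_healthy 60 osism-ansible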
2025-11-01 13:55:12.762679 | orchestrator | 2025-11-01 13:55:12.762760 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-01 13:55:12.762774 | orchestrator | 2025-11-01 13:55:12.762786 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-01 13:55:12.762797 | orchestrator | Saturday 01 November 2025 13:55:02 +0000 (0:00:00.227) 0:00:00.227 ***** 2025-11-01 13:55:12.762808 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:55:12.762820 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:55:12.762831 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:55:12.762842 | orchestrator | ok: [testbed-manager] 2025-11-01 13:55:12.762852 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:55:12.762863 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:55:12.762873 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:55:12.762884 | orchestrator | 2025-11-01 13:55:12.762895 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-11-01 13:55:12.762905 | orchestrator | 2025-11-01 13:55:12.762916 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-11-01 13:55:12.762927 | orchestrator | Saturday 01 November 2025 13:55:11 +0000 (0:00:08.949) 0:00:09.177 ***** 2025-11-01 13:55:12.762938 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:55:12.762949 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:55:12.762960 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:55:12.762971 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:55:12.762981 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:55:12.762991 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:55:12.763002 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:55:12.763012 | orchestrator | 2025-11-01 13:55:12.763023 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:55:12.763034 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:55:12.763046 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:55:12.763057 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:55:12.763067 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:55:12.763078 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:55:12.763089 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:55:12.763100 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:55:12.763110 | orchestrator | 2025-11-01 13:55:12.763121 | orchestrator | 2025-11-01 13:55:12.763149 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:55:12.763161 | orchestrator | Saturday 01 November 2025 13:55:12 +0000 (0:00:00.588) 0:00:09.766 ***** 2025-11-01 13:55:12.763172 | orchestrator | =============================================================================== 2025-11-01 13:55:12.763182 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.95s 2025-11-01 
13:55:12.763215 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2025-11-01 13:55:13.089547 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-11-01 13:55:13.101811 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-11-01 13:55:13.119438 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-11-01 13:55:13.136035 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-11-01 13:55:13.155933 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-11-01 13:55:13.169149 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-11-01 13:55:13.182652 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-11-01 13:55:13.195820 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-11-01 13:55:13.208603 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-11-01 13:55:13.229270 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-11-01 13:55:13.244538 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-11-01 13:55:13.259495 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-11-01 13:55:13.274643 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-11-01 13:55:13.288770 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-11-01 13:55:13.302908 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-11-01 13:55:13.314305 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-11-01 13:55:13.327736 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-11-01 13:55:13.342177 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-11-01 13:55:13.366734 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-11-01 13:55:13.382615 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-11-01 13:55:13.398118 | orchestrator | + [[ false == \t\r\u\e ]] 2025-11-01 13:55:13.692387 | orchestrator | ok: Runtime: 0:25:07.862210 2025-11-01 13:55:13.792446 | 2025-11-01 13:55:13.792572 | TASK [Deploy services] 2025-11-01 13:55:14.323216 | orchestrator | skipping: Conditional result was False 2025-11-01 13:55:14.333543 | 2025-11-01 13:55:14.333674 | TASK [Deploy in a nutshell] 2025-11-01 13:55:14.990621 | orchestrator | + set -e 
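The block of `sudo ln -sf` calls above installs the deploy-*, upgrade-* and bootstrap-* convenience wrappers into /usr/local/bin, one explicit command per link. The same effect can be expressed as a table-driven loop; a partial sketch (only a few of the mappings are repeated here, and the real script spells every link out explicitly as shown above):

    # Map script paths to wrapper names, then link them in one loop.
    declare -A links=(
        [/opt/configuration/scripts/deploy/500-kubernetes.sh]=deploy-kubernetes
        [/opt/configuration/scripts/deploy/300-openstack.sh]=deploy-openstack
        [/opt/configuration/scripts/upgrade/300-openstack.sh]=upgrade-openstack
    )
    for target in "${!links[@]}"; do
        sudo ln -sf "${target}" "/usr/local/bin/${links[${target}]}"
    done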
2025-11-01 13:55:14.992163 | orchestrator | 2025-11-01 13:55:14.992194 | orchestrator | # PULL IMAGES 2025-11-01 13:55:14.992200 | orchestrator | 2025-11-01 13:55:14.992212 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-01 13:55:14.992221 | orchestrator | ++ export INTERACTIVE=false 2025-11-01 13:55:14.992227 | orchestrator | ++ INTERACTIVE=false 2025-11-01 13:55:14.992243 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-01 13:55:14.992252 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-01 13:55:14.992258 | orchestrator | + source /opt/manager-vars.sh 2025-11-01 13:55:14.992262 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-01 13:55:14.992270 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-01 13:55:14.992274 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-01 13:55:14.992281 | orchestrator | ++ CEPH_VERSION=reef 2025-11-01 13:55:14.992285 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-01 13:55:14.992292 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-01 13:55:14.992297 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-01 13:55:14.992302 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-01 13:55:14.992307 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-01 13:55:14.992311 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-01 13:55:14.992315 | orchestrator | ++ export ARA=false 2025-11-01 13:55:14.992318 | orchestrator | ++ ARA=false 2025-11-01 13:55:14.992322 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-01 13:55:14.992326 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-01 13:55:14.992330 | orchestrator | ++ export TEMPEST=false 2025-11-01 13:55:14.992334 | orchestrator | ++ TEMPEST=false 2025-11-01 13:55:14.992337 | orchestrator | ++ export IS_ZUUL=true 2025-11-01 13:55:14.992341 | orchestrator | ++ IS_ZUUL=true 2025-11-01 13:55:14.992345 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.208 2025-11-01 13:55:14.992349 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.208 2025-11-01 13:55:14.992353 | orchestrator | ++ export EXTERNAL_API=false 2025-11-01 13:55:14.992357 | orchestrator | ++ EXTERNAL_API=false 2025-11-01 13:55:14.992360 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-01 13:55:14.992409 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-01 13:55:14.992414 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-01 13:55:14.992418 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-01 13:55:14.992422 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-01 13:55:14.992425 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-01 13:55:14.992429 | orchestrator | + echo 2025-11-01 13:55:14.992438 | orchestrator | + echo '# PULL IMAGES' 2025-11-01 13:55:14.992442 | orchestrator | + echo 2025-11-01 13:55:14.992769 | orchestrator | ++ semver latest 7.0.0 2025-11-01 13:55:15.056789 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-01 13:55:15.056822 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-01 13:55:15.056827 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-11-01 13:55:17.017839 | orchestrator | 2025-11-01 13:55:17 | INFO  | Trying to run play pull-images in environment custom 2025-11-01 13:55:27.100113 | orchestrator | 2025-11-01 13:55:27 | INFO  | Task 8afba20c-a9b3-471a-a9f8-c4cb1506ceec (pull-images) was prepared for execution. 2025-11-01 13:55:27.100910 | orchestrator | 2025-11-01 13:55:27 | INFO  | Task 8afba20c-a9b3-471a-a9f8-c4cb1506ceec is running in background. No more output. Check ARA for logs. 
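The `semver latest 7.0.0` line followed by `[[ -1 -ge 0 ]]` and `[[ latest == latest ]]` shows the version gate used throughout these scripts: the manager version is compared against a feature threshold with the semver helper, and because semver returns -1 for the non-numeric "latest" tag, an extra literal-string check lets the moving tag take the new code path as well. A reconstruction of the gate from the trace (the semver helper is whatever the script already provides; the fallback branch is an assumption, since only the new path is visible in this log):

    MANAGER_VERSION=${MANAGER_VERSION:-latest}

    if [[ $(semver "${MANAGER_VERSION}" 7.0.0) -ge 0 ]] || [[ "${MANAGER_VERSION}" == "latest" ]]; then
        # New enough manager: background the image pull with retries.
        osism apply --no-wait -r 2 -e custom pull-images
    else
        # Assumed fallback for older managers without --no-wait support.
        osism apply -e custom pull-images
    fi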
2025-11-01 13:55:29.499528 | orchestrator | 2025-11-01 13:55:29 | INFO  | Trying to run play wipe-partitions in environment custom 2025-11-01 13:55:39.690796 | orchestrator | 2025-11-01 13:55:39 | INFO  | Task 7788855b-01c6-4022-ba83-678b2126a582 (wipe-partitions) was prepared for execution. 2025-11-01 13:55:39.690873 | orchestrator | 2025-11-01 13:55:39 | INFO  | It takes a moment until task 7788855b-01c6-4022-ba83-678b2126a582 (wipe-partitions) has been started and output is visible here. 2025-11-01 13:55:55.410349 | orchestrator | 2025-11-01 13:55:55.410516 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-11-01 13:55:55.410540 | orchestrator | 2025-11-01 13:55:55.410559 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-11-01 13:55:55.410588 | orchestrator | Saturday 01 November 2025 13:55:45 +0000 (0:00:00.150) 0:00:00.150 ***** 2025-11-01 13:55:55.410608 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:55:55.410628 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:55:55.410648 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:55:55.410666 | orchestrator | 2025-11-01 13:55:55.410685 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-11-01 13:55:55.410740 | orchestrator | Saturday 01 November 2025 13:55:46 +0000 (0:00:00.590) 0:00:00.740 ***** 2025-11-01 13:55:55.410759 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:55:55.410776 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:55:55.410800 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:55:55.410818 | orchestrator | 2025-11-01 13:55:55.410836 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-11-01 13:55:55.410855 | orchestrator | Saturday 01 November 2025 13:55:46 +0000 (0:00:00.388) 0:00:01.129 ***** 2025-11-01 13:55:55.410874 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:55:55.410896 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:55:55.410920 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:55:55.410938 | orchestrator | 2025-11-01 13:55:55.410956 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-11-01 13:55:55.410974 | orchestrator | Saturday 01 November 2025 13:55:47 +0000 (0:00:00.675) 0:00:01.805 ***** 2025-11-01 13:55:55.410993 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:55:55.411011 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:55:55.411029 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:55:55.411047 | orchestrator | 2025-11-01 13:55:55.411067 | orchestrator | TASK [Check device availability] *********************************************** 2025-11-01 13:55:55.411085 | orchestrator | Saturday 01 November 2025 13:55:47 +0000 (0:00:00.316) 0:00:02.122 ***** 2025-11-01 13:55:55.411103 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-11-01 13:55:55.411130 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-11-01 13:55:55.411150 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-11-01 13:55:55.411169 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-11-01 13:55:55.411188 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-11-01 13:55:55.411208 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-11-01 13:55:55.411227 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 
2025-11-01 13:55:55.411243 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-11-01 13:55:55.411260 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-11-01 13:55:55.411278 | orchestrator | 2025-11-01 13:55:55.411295 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-11-01 13:55:55.411313 | orchestrator | Saturday 01 November 2025 13:55:48 +0000 (0:00:01.309) 0:00:03.431 ***** 2025-11-01 13:55:55.411331 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-11-01 13:55:55.411348 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-11-01 13:55:55.411403 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-11-01 13:55:55.411426 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-11-01 13:55:55.411443 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-11-01 13:55:55.411460 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-11-01 13:55:55.411478 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-11-01 13:55:55.411496 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-11-01 13:55:55.411514 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-11-01 13:55:55.411532 | orchestrator | 2025-11-01 13:55:55.411550 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-11-01 13:55:55.411569 | orchestrator | Saturday 01 November 2025 13:55:50 +0000 (0:00:01.604) 0:00:05.036 ***** 2025-11-01 13:55:55.411587 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-11-01 13:55:55.411604 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-11-01 13:55:55.411623 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-11-01 13:55:55.411639 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-11-01 13:55:55.411655 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-11-01 13:55:55.411673 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-11-01 13:55:55.411690 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-11-01 13:55:55.411729 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-11-01 13:55:55.411759 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-11-01 13:55:55.411778 | orchestrator | 2025-11-01 13:55:55.411796 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-11-01 13:55:55.411812 | orchestrator | Saturday 01 November 2025 13:55:53 +0000 (0:00:03.209) 0:00:08.245 ***** 2025-11-01 13:55:55.411828 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:55:55.411844 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:55:55.411862 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:55:55.411880 | orchestrator | 2025-11-01 13:55:55.411898 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-11-01 13:55:55.411917 | orchestrator | Saturday 01 November 2025 13:55:54 +0000 (0:00:00.603) 0:00:08.849 ***** 2025-11-01 13:55:55.411935 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:55:55.411948 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:55:55.411959 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:55:55.411970 | orchestrator | 2025-11-01 13:55:55.411980 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:55:55.411996 | orchestrator | testbed-node-3 : ok=7  changed=5  
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:55:55.412009 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:55:55.412047 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:55:55.412058 | orchestrator | 2025-11-01 13:55:55.412069 | orchestrator | 2025-11-01 13:55:55.412080 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:55:55.412091 | orchestrator | Saturday 01 November 2025 13:55:54 +0000 (0:00:00.694) 0:00:09.544 ***** 2025-11-01 13:55:55.412101 | orchestrator | =============================================================================== 2025-11-01 13:55:55.412112 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.21s 2025-11-01 13:55:55.412122 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.60s 2025-11-01 13:55:55.412133 | orchestrator | Check device availability ----------------------------------------------- 1.31s 2025-11-01 13:55:55.412144 | orchestrator | Request device events from the kernel ----------------------------------- 0.69s 2025-11-01 13:55:55.412154 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.68s 2025-11-01 13:55:55.412165 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s 2025-11-01 13:55:55.412175 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-11-01 13:55:55.412186 | orchestrator | Remove all rook related logical devices --------------------------------- 0.39s 2025-11-01 13:55:55.412197 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.32s 2025-11-01 13:56:07.997900 | orchestrator | 2025-11-01 13:56:07 | INFO  | Task 9e92339f-4b71-4a28-afc8-df679e0a4834 (facts) was prepared for execution. 2025-11-01 13:56:07.998005 | orchestrator | 2025-11-01 13:56:07 | INFO  | It takes a moment until task 9e92339f-4b71-4a28-afc8-df679e0a4834 (facts) has been started and output is visible here. 
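The wipe-partitions play above prepares the extra disks (/dev/sdb, /dev/sdc, /dev/sdd) on the storage nodes testbed-node-3/4/5 for a fresh Ceph deployment: it clears old signatures, zeroes the start of each device, and re-triggers udev. A manual equivalent of those core steps, sketched from the task names (the dd block count and oflag are assumptions, and the commands are destructive):

    # Clear any previous Ceph/LVM state from the OSD candidate disks.
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        sudo wipefs --all "${dev}"                                    # drop filesystem/LVM signatures
        sudo dd if=/dev/zero of="${dev}" bs=1M count=32 oflag=direct  # overwrite the first 32M with zeros
    done
    sudo udevadm control --reload-rules   # reload udev rules
    sudo udevadm trigger                  # request device events from the kernel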
2025-11-01 13:56:21.212978 | orchestrator | 2025-11-01 13:56:21.213098 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-11-01 13:56:21.213115 | orchestrator | 2025-11-01 13:56:21.213127 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-11-01 13:56:21.213139 | orchestrator | Saturday 01 November 2025 13:56:12 +0000 (0:00:00.287) 0:00:00.287 ***** 2025-11-01 13:56:21.213151 | orchestrator | ok: [testbed-manager] 2025-11-01 13:56:21.213163 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:56:21.213174 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:56:21.213209 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:56:21.213220 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:56:21.213231 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:56:21.213241 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:56:21.213252 | orchestrator | 2025-11-01 13:56:21.213263 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-11-01 13:56:21.213274 | orchestrator | Saturday 01 November 2025 13:56:13 +0000 (0:00:01.184) 0:00:01.472 ***** 2025-11-01 13:56:21.213284 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:56:21.213296 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:56:21.213307 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:56:21.213317 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:56:21.213328 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:21.213338 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:21.213349 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:56:21.213359 | orchestrator | 2025-11-01 13:56:21.213422 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-01 13:56:21.213434 | orchestrator | 2025-11-01 13:56:21.213460 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-01 13:56:21.213472 | orchestrator | Saturday 01 November 2025 13:56:15 +0000 (0:00:01.324) 0:00:02.797 ***** 2025-11-01 13:56:21.213482 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:56:21.213493 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:56:21.213504 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:56:21.213515 | orchestrator | ok: [testbed-manager] 2025-11-01 13:56:21.213528 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:56:21.213539 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:56:21.213551 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:56:21.213563 | orchestrator | 2025-11-01 13:56:21.213575 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-11-01 13:56:21.213587 | orchestrator | 2025-11-01 13:56:21.213599 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-11-01 13:56:21.213611 | orchestrator | Saturday 01 November 2025 13:56:20 +0000 (0:00:04.838) 0:00:07.635 ***** 2025-11-01 13:56:21.213623 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:56:21.213635 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:56:21.213646 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:56:21.213659 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:56:21.213671 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:21.213682 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:21.213694 | orchestrator | skipping: 
[testbed-node-5] 2025-11-01 13:56:21.213706 | orchestrator | 2025-11-01 13:56:21.213718 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:56:21.213730 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:56:21.213744 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:56:21.213756 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:56:21.213768 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:56:21.213780 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:56:21.213792 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:56:21.213804 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:56:21.213815 | orchestrator | 2025-11-01 13:56:21.213835 | orchestrator | 2025-11-01 13:56:21.213847 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:56:21.213860 | orchestrator | Saturday 01 November 2025 13:56:20 +0000 (0:00:00.608) 0:00:08.244 ***** 2025-11-01 13:56:21.213872 | orchestrator | =============================================================================== 2025-11-01 13:56:21.213884 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.84s 2025-11-01 13:56:21.213894 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s 2025-11-01 13:56:21.213905 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.18s 2025-11-01 13:56:21.213916 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s 2025-11-01 13:56:23.662485 | orchestrator | 2025-11-01 13:56:23 | INFO  | Task 64a2fc5f-70df-4541-8e0a-c600a2fae726 (ceph-configure-lvm-volumes) was prepared for execution. 2025-11-01 13:56:23.662562 | orchestrator | 2025-11-01 13:56:23 | INFO  | It takes a moment until task 64a2fc5f-70df-4541-8e0a-c600a2fae726 (ceph-configure-lvm-volumes) has been started and output is visible here. 
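The ceph-configure-lvm-volumes play that follows walks every block device on each OSD node, collects its stable /dev/disk/by-id links and partitions, assigns per-OSD LVM UUIDs, and renders the lvm_volumes structure consumed by ceph-ansible. To inspect the same by-id aliases it discovers (for example the scsi-*QEMU_QEMU_HARDDISK* entries printed below for testbed-node-3), something like the following can be run on a node; the device list is illustrative:

    # Print the persistent by-id aliases for the candidate OSD disks; these are
    # the names that end up referenced in the generated Ceph LVM configuration.
    for dev in sdb sdc sdd; do
        echo "== /dev/${dev} =="
        find /dev/disk/by-id -lname "*/${dev}"
    done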
2025-11-01 13:56:36.269168 | orchestrator | 2025-11-01 13:56:36.269249 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-11-01 13:56:36.269257 | orchestrator | 2025-11-01 13:56:36.269264 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-01 13:56:36.269270 | orchestrator | Saturday 01 November 2025 13:56:28 +0000 (0:00:00.365) 0:00:00.365 ***** 2025-11-01 13:56:36.269276 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-01 13:56:36.269282 | orchestrator | 2025-11-01 13:56:36.269288 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-01 13:56:36.269293 | orchestrator | Saturday 01 November 2025 13:56:28 +0000 (0:00:00.269) 0:00:00.635 ***** 2025-11-01 13:56:36.269299 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:56:36.269305 | orchestrator | 2025-11-01 13:56:36.269310 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:36.269316 | orchestrator | Saturday 01 November 2025 13:56:28 +0000 (0:00:00.246) 0:00:00.881 ***** 2025-11-01 13:56:36.269321 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-11-01 13:56:36.269327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-11-01 13:56:36.269333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-11-01 13:56:36.269345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-11-01 13:56:36.269351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-11-01 13:56:36.269356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-11-01 13:56:36.269362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-11-01 13:56:36.269367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-11-01 13:56:36.269405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-11-01 13:56:36.269411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-11-01 13:56:36.269416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-11-01 13:56:36.269422 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-11-01 13:56:36.269427 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-11-01 13:56:36.269432 | orchestrator | 2025-11-01 13:56:36.269438 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:36.269443 | orchestrator | Saturday 01 November 2025 13:56:29 +0000 (0:00:00.487) 0:00:01.368 ***** 2025-11-01 13:56:36.269449 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:36.269470 | orchestrator | 2025-11-01 13:56:36.269476 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:36.269481 | orchestrator | Saturday 01 November 2025 13:56:29 +0000 (0:00:00.203) 0:00:01.572 ***** 2025-11-01 13:56:36.269487 | orchestrator | skipping: [testbed-node-3] 2025-11-01 
13:56:36.269492 | orchestrator | 2025-11-01 13:56:36.269498 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:36.269503 | orchestrator | Saturday 01 November 2025 13:56:29 +0000 (0:00:00.203) 0:00:01.776 ***** 2025-11-01 13:56:36.269508 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:36.269514 | orchestrator | 2025-11-01 13:56:36.269519 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:36.269525 | orchestrator | Saturday 01 November 2025 13:56:30 +0000 (0:00:00.209) 0:00:01.985 ***** 2025-11-01 13:56:36.269530 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:36.269539 | orchestrator | 2025-11-01 13:56:36.269544 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:36.269550 | orchestrator | Saturday 01 November 2025 13:56:30 +0000 (0:00:00.221) 0:00:02.207 ***** 2025-11-01 13:56:36.269555 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:36.269561 | orchestrator | 2025-11-01 13:56:36.269566 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:36.269572 | orchestrator | Saturday 01 November 2025 13:56:30 +0000 (0:00:00.223) 0:00:02.431 ***** 2025-11-01 13:56:36.269577 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:36.269582 | orchestrator | 2025-11-01 13:56:36.269588 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:36.269593 | orchestrator | Saturday 01 November 2025 13:56:30 +0000 (0:00:00.197) 0:00:02.629 ***** 2025-11-01 13:56:36.269598 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:36.269604 | orchestrator | 2025-11-01 13:56:36.269609 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:36.269615 | orchestrator | Saturday 01 November 2025 13:56:30 +0000 (0:00:00.216) 0:00:02.845 ***** 2025-11-01 13:56:36.269620 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:36.269625 | orchestrator | 2025-11-01 13:56:36.269631 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:36.269636 | orchestrator | Saturday 01 November 2025 13:56:31 +0000 (0:00:00.204) 0:00:03.049 ***** 2025-11-01 13:56:36.269641 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede) 2025-11-01 13:56:36.269648 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede) 2025-11-01 13:56:36.269653 | orchestrator | 2025-11-01 13:56:36.269659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:36.269664 | orchestrator | Saturday 01 November 2025 13:56:31 +0000 (0:00:00.434) 0:00:03.484 ***** 2025-11-01 13:56:36.269680 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4fee078c-1565-4ab1-bdda-b8bebdd42045) 2025-11-01 13:56:36.269686 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4fee078c-1565-4ab1-bdda-b8bebdd42045) 2025-11-01 13:56:36.269691 | orchestrator | 2025-11-01 13:56:36.269697 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:36.269702 | orchestrator | Saturday 01 November 2025 13:56:32 +0000 (0:00:00.692) 0:00:04.177 ***** 2025-11-01 
13:56:36.269711 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c17a8236-4766-4598-abab-5d58d5ce65a6) 2025-11-01 13:56:36.269717 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c17a8236-4766-4598-abab-5d58d5ce65a6) 2025-11-01 13:56:36.269722 | orchestrator | 2025-11-01 13:56:36.269727 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:36.269733 | orchestrator | Saturday 01 November 2025 13:56:32 +0000 (0:00:00.671) 0:00:04.848 ***** 2025-11-01 13:56:36.269738 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7d89e604-ccfa-4ce6-abe5-76180138882d) 2025-11-01 13:56:36.269749 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7d89e604-ccfa-4ce6-abe5-76180138882d) 2025-11-01 13:56:36.269755 | orchestrator | 2025-11-01 13:56:36.269760 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:36.269766 | orchestrator | Saturday 01 November 2025 13:56:33 +0000 (0:00:00.885) 0:00:05.734 ***** 2025-11-01 13:56:36.269771 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-01 13:56:36.269776 | orchestrator | 2025-11-01 13:56:36.269782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:36.269787 | orchestrator | Saturday 01 November 2025 13:56:34 +0000 (0:00:00.358) 0:00:06.093 ***** 2025-11-01 13:56:36.269792 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-11-01 13:56:36.269797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-11-01 13:56:36.269803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-11-01 13:56:36.269808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-11-01 13:56:36.269813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-11-01 13:56:36.269819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-11-01 13:56:36.269824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-11-01 13:56:36.269829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-11-01 13:56:36.269835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-11-01 13:56:36.269840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-11-01 13:56:36.269845 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-11-01 13:56:36.269851 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-11-01 13:56:36.269856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-11-01 13:56:36.269861 | orchestrator | 2025-11-01 13:56:36.269867 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:36.269872 | orchestrator | Saturday 01 November 2025 13:56:34 +0000 (0:00:00.389) 0:00:06.482 ***** 2025-11-01 13:56:36.269877 | orchestrator | skipping: [testbed-node-3] 
2025-11-01 13:56:36.269883 | orchestrator | 2025-11-01 13:56:36.269888 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:36.269893 | orchestrator | Saturday 01 November 2025 13:56:34 +0000 (0:00:00.212) 0:00:06.695 ***** 2025-11-01 13:56:36.269898 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:36.269904 | orchestrator | 2025-11-01 13:56:36.269909 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:36.269914 | orchestrator | Saturday 01 November 2025 13:56:34 +0000 (0:00:00.215) 0:00:06.911 ***** 2025-11-01 13:56:36.269920 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:36.269925 | orchestrator | 2025-11-01 13:56:36.269930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:36.269936 | orchestrator | Saturday 01 November 2025 13:56:35 +0000 (0:00:00.212) 0:00:07.124 ***** 2025-11-01 13:56:36.269941 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:36.269946 | orchestrator | 2025-11-01 13:56:36.269952 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:36.269957 | orchestrator | Saturday 01 November 2025 13:56:35 +0000 (0:00:00.204) 0:00:07.329 ***** 2025-11-01 13:56:36.269962 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:36.269967 | orchestrator | 2025-11-01 13:56:36.269977 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:36.269982 | orchestrator | Saturday 01 November 2025 13:56:35 +0000 (0:00:00.224) 0:00:07.553 ***** 2025-11-01 13:56:36.269988 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:36.269993 | orchestrator | 2025-11-01 13:56:36.269999 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:36.270004 | orchestrator | Saturday 01 November 2025 13:56:35 +0000 (0:00:00.216) 0:00:07.769 ***** 2025-11-01 13:56:36.270009 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:36.270014 | orchestrator | 2025-11-01 13:56:36.270056 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:36.270061 | orchestrator | Saturday 01 November 2025 13:56:36 +0000 (0:00:00.188) 0:00:07.957 ***** 2025-11-01 13:56:36.270070 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.168407 | orchestrator | 2025-11-01 13:56:44.168498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:44.168511 | orchestrator | Saturday 01 November 2025 13:56:36 +0000 (0:00:00.222) 0:00:08.180 ***** 2025-11-01 13:56:44.168520 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-11-01 13:56:44.168529 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-11-01 13:56:44.168538 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-11-01 13:56:44.168546 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-11-01 13:56:44.168554 | orchestrator | 2025-11-01 13:56:44.168562 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:44.168570 | orchestrator | Saturday 01 November 2025 13:56:37 +0000 (0:00:01.092) 0:00:09.272 ***** 2025-11-01 13:56:44.168594 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.168603 | orchestrator | 2025-11-01 13:56:44.168611 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:44.168619 | orchestrator | Saturday 01 November 2025 13:56:37 +0000 (0:00:00.215) 0:00:09.488 ***** 2025-11-01 13:56:44.168627 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.168635 | orchestrator | 2025-11-01 13:56:44.168642 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:44.168650 | orchestrator | Saturday 01 November 2025 13:56:37 +0000 (0:00:00.197) 0:00:09.686 ***** 2025-11-01 13:56:44.168658 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.168666 | orchestrator | 2025-11-01 13:56:44.168674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:44.168682 | orchestrator | Saturday 01 November 2025 13:56:37 +0000 (0:00:00.223) 0:00:09.910 ***** 2025-11-01 13:56:44.168690 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.168698 | orchestrator | 2025-11-01 13:56:44.168706 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-11-01 13:56:44.168713 | orchestrator | Saturday 01 November 2025 13:56:38 +0000 (0:00:00.202) 0:00:10.112 ***** 2025-11-01 13:56:44.168721 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-11-01 13:56:44.168729 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-11-01 13:56:44.168737 | orchestrator | 2025-11-01 13:56:44.168745 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-11-01 13:56:44.168753 | orchestrator | Saturday 01 November 2025 13:56:38 +0000 (0:00:00.181) 0:00:10.294 ***** 2025-11-01 13:56:44.168761 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.168768 | orchestrator | 2025-11-01 13:56:44.168776 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-11-01 13:56:44.168784 | orchestrator | Saturday 01 November 2025 13:56:38 +0000 (0:00:00.143) 0:00:10.438 ***** 2025-11-01 13:56:44.168792 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.168800 | orchestrator | 2025-11-01 13:56:44.168808 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-11-01 13:56:44.168816 | orchestrator | Saturday 01 November 2025 13:56:38 +0000 (0:00:00.160) 0:00:10.599 ***** 2025-11-01 13:56:44.168823 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.168850 | orchestrator | 2025-11-01 13:56:44.168858 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-11-01 13:56:44.168866 | orchestrator | Saturday 01 November 2025 13:56:38 +0000 (0:00:00.151) 0:00:10.750 ***** 2025-11-01 13:56:44.168874 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:56:44.168882 | orchestrator | 2025-11-01 13:56:44.168890 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-11-01 13:56:44.168898 | orchestrator | Saturday 01 November 2025 13:56:38 +0000 (0:00:00.135) 0:00:10.886 ***** 2025-11-01 13:56:44.168906 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47edfe94-e799-500a-9f78-eae255c41273'}}) 2025-11-01 13:56:44.168914 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'efff7302-70e8-5bbc-90af-2166d1a25777'}}) 2025-11-01 13:56:44.168922 | orchestrator | 
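The "Set UUIDs for OSD VGs/LVs" and "Generate lvm_volumes structure (block only)" tasks turn the two bare OSD entries (sdb and sdc, initially with value None) into the data/data_vg pairs printed further down. A minimal sketch, assuming the UUID is derived deterministically with the to_uuid filter; the log only shows that a stable osd_lvm_uuid ends up attached to each device:

- name: Set UUIDs for OSD VGs/LVs
  ansible.builtin.set_fact:
    ceph_osd_devices: >-
      {{ ceph_osd_devices | combine(
           {item.key: {'osd_lvm_uuid': (inventory_hostname ~ '-' ~ item.key) | to_uuid}},
           recursive=True) }}
  loop: "{{ ceph_osd_devices | dict2items }}"

- name: Generate lvm_volumes structure (block only)
  ansible.builtin.set_fact:
    lvm_volumes: >-
      {{ lvm_volumes | default([])
         + [{'data': 'osd-block-' ~ item.value.osd_lvm_uuid,
             'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid}] }}
  loop: "{{ ceph_osd_devices | dict2items }}"

The naming matches the output below: each OSD gets an LV named osd-block-<uuid> inside a VG named ceph-<uuid>. The block+db, block+wal and block+db+wal variants are skipped here since no separate DB or WAL devices are configured for these nodes.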
2025-11-01 13:56:44.168930 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-11-01 13:56:44.168938 | orchestrator | Saturday 01 November 2025 13:56:39 +0000 (0:00:00.170) 0:00:11.056 ***** 2025-11-01 13:56:44.168946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47edfe94-e799-500a-9f78-eae255c41273'}})  2025-11-01 13:56:44.168961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'efff7302-70e8-5bbc-90af-2166d1a25777'}})  2025-11-01 13:56:44.168969 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.168977 | orchestrator | 2025-11-01 13:56:44.168985 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-11-01 13:56:44.168993 | orchestrator | Saturday 01 November 2025 13:56:39 +0000 (0:00:00.154) 0:00:11.210 ***** 2025-11-01 13:56:44.169001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47edfe94-e799-500a-9f78-eae255c41273'}})  2025-11-01 13:56:44.169009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'efff7302-70e8-5bbc-90af-2166d1a25777'}})  2025-11-01 13:56:44.169017 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.169024 | orchestrator | 2025-11-01 13:56:44.169032 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-11-01 13:56:44.169040 | orchestrator | Saturday 01 November 2025 13:56:39 +0000 (0:00:00.359) 0:00:11.570 ***** 2025-11-01 13:56:44.169047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47edfe94-e799-500a-9f78-eae255c41273'}})  2025-11-01 13:56:44.169055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'efff7302-70e8-5bbc-90af-2166d1a25777'}})  2025-11-01 13:56:44.169063 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.169071 | orchestrator | 2025-11-01 13:56:44.169091 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-11-01 13:56:44.169100 | orchestrator | Saturday 01 November 2025 13:56:39 +0000 (0:00:00.186) 0:00:11.756 ***** 2025-11-01 13:56:44.169108 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:56:44.169116 | orchestrator | 2025-11-01 13:56:44.169123 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-11-01 13:56:44.169131 | orchestrator | Saturday 01 November 2025 13:56:40 +0000 (0:00:00.185) 0:00:11.942 ***** 2025-11-01 13:56:44.169139 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:56:44.169146 | orchestrator | 2025-11-01 13:56:44.169154 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-11-01 13:56:44.169162 | orchestrator | Saturday 01 November 2025 13:56:40 +0000 (0:00:00.174) 0:00:12.116 ***** 2025-11-01 13:56:44.169170 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.169177 | orchestrator | 2025-11-01 13:56:44.169185 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-11-01 13:56:44.169193 | orchestrator | Saturday 01 November 2025 13:56:40 +0000 (0:00:00.159) 0:00:12.276 ***** 2025-11-01 13:56:44.169200 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.169208 | orchestrator | 2025-11-01 13:56:44.169222 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-11-01 13:56:44.169230 | orchestrator | Saturday 01 November 2025 13:56:40 +0000 (0:00:00.131) 0:00:12.407 ***** 2025-11-01 13:56:44.169238 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.169245 | orchestrator | 2025-11-01 13:56:44.169253 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-11-01 13:56:44.169261 | orchestrator | Saturday 01 November 2025 13:56:40 +0000 (0:00:00.155) 0:00:12.562 ***** 2025-11-01 13:56:44.169269 | orchestrator | ok: [testbed-node-3] => { 2025-11-01 13:56:44.169276 | orchestrator |  "ceph_osd_devices": { 2025-11-01 13:56:44.169284 | orchestrator |  "sdb": { 2025-11-01 13:56:44.169292 | orchestrator |  "osd_lvm_uuid": "47edfe94-e799-500a-9f78-eae255c41273" 2025-11-01 13:56:44.169300 | orchestrator |  }, 2025-11-01 13:56:44.169308 | orchestrator |  "sdc": { 2025-11-01 13:56:44.169315 | orchestrator |  "osd_lvm_uuid": "efff7302-70e8-5bbc-90af-2166d1a25777" 2025-11-01 13:56:44.169323 | orchestrator |  } 2025-11-01 13:56:44.169331 | orchestrator |  } 2025-11-01 13:56:44.169339 | orchestrator | } 2025-11-01 13:56:44.169346 | orchestrator | 2025-11-01 13:56:44.169354 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-11-01 13:56:44.169362 | orchestrator | Saturday 01 November 2025 13:56:40 +0000 (0:00:00.153) 0:00:12.716 ***** 2025-11-01 13:56:44.169392 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.169401 | orchestrator | 2025-11-01 13:56:44.169409 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-11-01 13:56:44.169416 | orchestrator | Saturday 01 November 2025 13:56:40 +0000 (0:00:00.156) 0:00:12.873 ***** 2025-11-01 13:56:44.169429 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.169437 | orchestrator | 2025-11-01 13:56:44.169445 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-11-01 13:56:44.169452 | orchestrator | Saturday 01 November 2025 13:56:41 +0000 (0:00:00.163) 0:00:13.036 ***** 2025-11-01 13:56:44.169460 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:56:44.169468 | orchestrator | 2025-11-01 13:56:44.169476 | orchestrator | TASK [Print configuration data] ************************************************ 2025-11-01 13:56:44.169483 | orchestrator | Saturday 01 November 2025 13:56:41 +0000 (0:00:00.140) 0:00:13.176 ***** 2025-11-01 13:56:44.169491 | orchestrator | changed: [testbed-node-3] => { 2025-11-01 13:56:44.169499 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-11-01 13:56:44.169506 | orchestrator |  "ceph_osd_devices": { 2025-11-01 13:56:44.169514 | orchestrator |  "sdb": { 2025-11-01 13:56:44.169522 | orchestrator |  "osd_lvm_uuid": "47edfe94-e799-500a-9f78-eae255c41273" 2025-11-01 13:56:44.169530 | orchestrator |  }, 2025-11-01 13:56:44.169538 | orchestrator |  "sdc": { 2025-11-01 13:56:44.169545 | orchestrator |  "osd_lvm_uuid": "efff7302-70e8-5bbc-90af-2166d1a25777" 2025-11-01 13:56:44.169553 | orchestrator |  } 2025-11-01 13:56:44.169561 | orchestrator |  }, 2025-11-01 13:56:44.169568 | orchestrator |  "lvm_volumes": [ 2025-11-01 13:56:44.169576 | orchestrator |  { 2025-11-01 13:56:44.169584 | orchestrator |  "data": "osd-block-47edfe94-e799-500a-9f78-eae255c41273", 2025-11-01 13:56:44.169591 | orchestrator |  "data_vg": "ceph-47edfe94-e799-500a-9f78-eae255c41273" 2025-11-01 13:56:44.169599 | orchestrator |  }, 2025-11-01 
13:56:44.169607 | orchestrator |  { 2025-11-01 13:56:44.169614 | orchestrator |  "data": "osd-block-efff7302-70e8-5bbc-90af-2166d1a25777", 2025-11-01 13:56:44.169622 | orchestrator |  "data_vg": "ceph-efff7302-70e8-5bbc-90af-2166d1a25777" 2025-11-01 13:56:44.169630 | orchestrator |  } 2025-11-01 13:56:44.169637 | orchestrator |  ] 2025-11-01 13:56:44.169645 | orchestrator |  } 2025-11-01 13:56:44.169653 | orchestrator | } 2025-11-01 13:56:44.169660 | orchestrator | 2025-11-01 13:56:44.169668 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-11-01 13:56:44.169682 | orchestrator | Saturday 01 November 2025 13:56:41 +0000 (0:00:00.437) 0:00:13.614 ***** 2025-11-01 13:56:44.169689 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-01 13:56:44.169697 | orchestrator | 2025-11-01 13:56:44.169705 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-11-01 13:56:44.169713 | orchestrator | 2025-11-01 13:56:44.169720 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-01 13:56:44.169728 | orchestrator | Saturday 01 November 2025 13:56:43 +0000 (0:00:01.952) 0:00:15.567 ***** 2025-11-01 13:56:44.169736 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-11-01 13:56:44.169743 | orchestrator | 2025-11-01 13:56:44.169751 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-01 13:56:44.169759 | orchestrator | Saturday 01 November 2025 13:56:43 +0000 (0:00:00.265) 0:00:15.832 ***** 2025-11-01 13:56:44.169766 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:56:44.169774 | orchestrator | 2025-11-01 13:56:44.169782 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:44.169794 | orchestrator | Saturday 01 November 2025 13:56:44 +0000 (0:00:00.248) 0:00:16.080 ***** 2025-11-01 13:56:52.480334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-11-01 13:56:52.480488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-11-01 13:56:52.480504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-11-01 13:56:52.480515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-11-01 13:56:52.480526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-11-01 13:56:52.480537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-11-01 13:56:52.480548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-11-01 13:56:52.480559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-11-01 13:56:52.480569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-11-01 13:56:52.480580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-11-01 13:56:52.480613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-11-01 13:56:52.480624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-11-01 13:56:52.480635 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-11-01 13:56:52.480651 | orchestrator | 2025-11-01 13:56:52.480663 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:52.480675 | orchestrator | Saturday 01 November 2025 13:56:44 +0000 (0:00:00.363) 0:00:16.444 ***** 2025-11-01 13:56:52.480686 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.480697 | orchestrator | 2025-11-01 13:56:52.480709 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:52.480720 | orchestrator | Saturday 01 November 2025 13:56:44 +0000 (0:00:00.225) 0:00:16.669 ***** 2025-11-01 13:56:52.480730 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.480741 | orchestrator | 2025-11-01 13:56:52.480752 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:52.480762 | orchestrator | Saturday 01 November 2025 13:56:44 +0000 (0:00:00.208) 0:00:16.878 ***** 2025-11-01 13:56:52.480773 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.480783 | orchestrator | 2025-11-01 13:56:52.480794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:52.480804 | orchestrator | Saturday 01 November 2025 13:56:45 +0000 (0:00:00.185) 0:00:17.064 ***** 2025-11-01 13:56:52.480815 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.480848 | orchestrator | 2025-11-01 13:56:52.480859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:52.480870 | orchestrator | Saturday 01 November 2025 13:56:45 +0000 (0:00:00.221) 0:00:17.285 ***** 2025-11-01 13:56:52.480882 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.480894 | orchestrator | 2025-11-01 13:56:52.480906 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:52.480919 | orchestrator | Saturday 01 November 2025 13:56:45 +0000 (0:00:00.599) 0:00:17.885 ***** 2025-11-01 13:56:52.480931 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.480942 | orchestrator | 2025-11-01 13:56:52.480955 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:52.480966 | orchestrator | Saturday 01 November 2025 13:56:46 +0000 (0:00:00.236) 0:00:18.121 ***** 2025-11-01 13:56:52.480979 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.480991 | orchestrator | 2025-11-01 13:56:52.481003 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:52.481015 | orchestrator | Saturday 01 November 2025 13:56:46 +0000 (0:00:00.210) 0:00:18.332 ***** 2025-11-01 13:56:52.481026 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.481038 | orchestrator | 2025-11-01 13:56:52.481050 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:52.481062 | orchestrator | Saturday 01 November 2025 13:56:46 +0000 (0:00:00.213) 0:00:18.545 ***** 2025-11-01 13:56:52.481074 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad) 2025-11-01 13:56:52.481088 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad) 2025-11-01 13:56:52.481099 | orchestrator | 2025-11-01 
13:56:52.481111 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:52.481123 | orchestrator | Saturday 01 November 2025 13:56:47 +0000 (0:00:00.444) 0:00:18.990 ***** 2025-11-01 13:56:52.481135 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_08ca9d91-9929-4ba3-9cad-ed75b64a043e) 2025-11-01 13:56:52.481147 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_08ca9d91-9929-4ba3-9cad-ed75b64a043e) 2025-11-01 13:56:52.481159 | orchestrator | 2025-11-01 13:56:52.481171 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:52.481183 | orchestrator | Saturday 01 November 2025 13:56:47 +0000 (0:00:00.431) 0:00:19.422 ***** 2025-11-01 13:56:52.481195 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_072d7475-b9a0-4b66-89cc-e4fcf46016ff) 2025-11-01 13:56:52.481207 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_072d7475-b9a0-4b66-89cc-e4fcf46016ff) 2025-11-01 13:56:52.481219 | orchestrator | 2025-11-01 13:56:52.481231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:52.481242 | orchestrator | Saturday 01 November 2025 13:56:47 +0000 (0:00:00.453) 0:00:19.875 ***** 2025-11-01 13:56:52.481271 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d5b7cda2-7cd1-4139-8c09-f2864ed6115a) 2025-11-01 13:56:52.481282 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d5b7cda2-7cd1-4139-8c09-f2864ed6115a) 2025-11-01 13:56:52.481293 | orchestrator | 2025-11-01 13:56:52.481303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:52.481314 | orchestrator | Saturday 01 November 2025 13:56:48 +0000 (0:00:00.458) 0:00:20.333 ***** 2025-11-01 13:56:52.481325 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-01 13:56:52.481336 | orchestrator | 2025-11-01 13:56:52.481346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:52.481363 | orchestrator | Saturday 01 November 2025 13:56:48 +0000 (0:00:00.350) 0:00:20.684 ***** 2025-11-01 13:56:52.481401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-11-01 13:56:52.481421 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-11-01 13:56:52.481431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-11-01 13:56:52.481442 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-11-01 13:56:52.481452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-11-01 13:56:52.481463 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-11-01 13:56:52.481473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-11-01 13:56:52.481484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-11-01 13:56:52.481494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-11-01 13:56:52.481505 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-11-01 13:56:52.481516 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-11-01 13:56:52.481526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-11-01 13:56:52.481537 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-11-01 13:56:52.481547 | orchestrator | 2025-11-01 13:56:52.481558 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:52.481569 | orchestrator | Saturday 01 November 2025 13:56:49 +0000 (0:00:00.389) 0:00:21.073 ***** 2025-11-01 13:56:52.481579 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.481590 | orchestrator | 2025-11-01 13:56:52.481600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:52.481611 | orchestrator | Saturday 01 November 2025 13:56:49 +0000 (0:00:00.674) 0:00:21.748 ***** 2025-11-01 13:56:52.481621 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.481632 | orchestrator | 2025-11-01 13:56:52.481643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:52.481653 | orchestrator | Saturday 01 November 2025 13:56:50 +0000 (0:00:00.200) 0:00:21.949 ***** 2025-11-01 13:56:52.481664 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.481674 | orchestrator | 2025-11-01 13:56:52.481685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:52.481695 | orchestrator | Saturday 01 November 2025 13:56:50 +0000 (0:00:00.200) 0:00:22.149 ***** 2025-11-01 13:56:52.481706 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.481716 | orchestrator | 2025-11-01 13:56:52.481727 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:52.481738 | orchestrator | Saturday 01 November 2025 13:56:50 +0000 (0:00:00.217) 0:00:22.367 ***** 2025-11-01 13:56:52.481748 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.481759 | orchestrator | 2025-11-01 13:56:52.481769 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:52.481780 | orchestrator | Saturday 01 November 2025 13:56:50 +0000 (0:00:00.263) 0:00:22.630 ***** 2025-11-01 13:56:52.481790 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.481801 | orchestrator | 2025-11-01 13:56:52.481812 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:52.481822 | orchestrator | Saturday 01 November 2025 13:56:50 +0000 (0:00:00.219) 0:00:22.850 ***** 2025-11-01 13:56:52.481833 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.481843 | orchestrator | 2025-11-01 13:56:52.481854 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:52.481864 | orchestrator | Saturday 01 November 2025 13:56:51 +0000 (0:00:00.209) 0:00:23.060 ***** 2025-11-01 13:56:52.481875 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.481885 | orchestrator | 2025-11-01 13:56:52.481896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:52.481913 | orchestrator | Saturday 01 November 
2025 13:56:51 +0000 (0:00:00.223) 0:00:23.284 ***** 2025-11-01 13:56:52.481923 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-11-01 13:56:52.481935 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-11-01 13:56:52.481946 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-11-01 13:56:52.481956 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-11-01 13:56:52.481967 | orchestrator | 2025-11-01 13:56:52.481978 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:52.481988 | orchestrator | Saturday 01 November 2025 13:56:52 +0000 (0:00:00.902) 0:00:24.187 ***** 2025-11-01 13:56:52.481999 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:52.482009 | orchestrator | 2025-11-01 13:56:52.482087 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:59.930163 | orchestrator | Saturday 01 November 2025 13:56:52 +0000 (0:00:00.205) 0:00:24.392 ***** 2025-11-01 13:56:59.930275 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.930293 | orchestrator | 2025-11-01 13:56:59.930306 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:59.930318 | orchestrator | Saturday 01 November 2025 13:56:52 +0000 (0:00:00.213) 0:00:24.606 ***** 2025-11-01 13:56:59.930328 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.930339 | orchestrator | 2025-11-01 13:56:59.930350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:56:59.930361 | orchestrator | Saturday 01 November 2025 13:56:52 +0000 (0:00:00.220) 0:00:24.827 ***** 2025-11-01 13:56:59.930439 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.930460 | orchestrator | 2025-11-01 13:56:59.930490 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-11-01 13:56:59.930515 | orchestrator | Saturday 01 November 2025 13:56:53 +0000 (0:00:00.759) 0:00:25.586 ***** 2025-11-01 13:56:59.930526 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-11-01 13:56:59.930537 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-11-01 13:56:59.930548 | orchestrator | 2025-11-01 13:56:59.930559 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-11-01 13:56:59.930570 | orchestrator | Saturday 01 November 2025 13:56:53 +0000 (0:00:00.189) 0:00:25.776 ***** 2025-11-01 13:56:59.930581 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.930592 | orchestrator | 2025-11-01 13:56:59.930603 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-11-01 13:56:59.930615 | orchestrator | Saturday 01 November 2025 13:56:54 +0000 (0:00:00.162) 0:00:25.938 ***** 2025-11-01 13:56:59.930626 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.930637 | orchestrator | 2025-11-01 13:56:59.930648 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-11-01 13:56:59.930658 | orchestrator | Saturday 01 November 2025 13:56:54 +0000 (0:00:00.150) 0:00:26.089 ***** 2025-11-01 13:56:59.930669 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.930682 | orchestrator | 2025-11-01 13:56:59.930694 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-11-01 
13:56:59.930707 | orchestrator | Saturday 01 November 2025 13:56:54 +0000 (0:00:00.128) 0:00:26.217 ***** 2025-11-01 13:56:59.930719 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:56:59.930731 | orchestrator | 2025-11-01 13:56:59.930743 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-11-01 13:56:59.930756 | orchestrator | Saturday 01 November 2025 13:56:54 +0000 (0:00:00.142) 0:00:26.359 ***** 2025-11-01 13:56:59.930768 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf0a4791-ac15-5066-8808-a0a6deeb0cc9'}}) 2025-11-01 13:56:59.930781 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5630d3b4-f241-5aa8-9956-015e1822542e'}}) 2025-11-01 13:56:59.930793 | orchestrator | 2025-11-01 13:56:59.930805 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-11-01 13:56:59.930840 | orchestrator | Saturday 01 November 2025 13:56:54 +0000 (0:00:00.196) 0:00:26.556 ***** 2025-11-01 13:56:59.930854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf0a4791-ac15-5066-8808-a0a6deeb0cc9'}})  2025-11-01 13:56:59.930867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5630d3b4-f241-5aa8-9956-015e1822542e'}})  2025-11-01 13:56:59.930880 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.930892 | orchestrator | 2025-11-01 13:56:59.930904 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-11-01 13:56:59.930916 | orchestrator | Saturday 01 November 2025 13:56:54 +0000 (0:00:00.235) 0:00:26.791 ***** 2025-11-01 13:56:59.930928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf0a4791-ac15-5066-8808-a0a6deeb0cc9'}})  2025-11-01 13:56:59.930940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5630d3b4-f241-5aa8-9956-015e1822542e'}})  2025-11-01 13:56:59.930952 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.930964 | orchestrator | 2025-11-01 13:56:59.930976 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-11-01 13:56:59.930989 | orchestrator | Saturday 01 November 2025 13:56:55 +0000 (0:00:00.239) 0:00:27.031 ***** 2025-11-01 13:56:59.931000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf0a4791-ac15-5066-8808-a0a6deeb0cc9'}})  2025-11-01 13:56:59.931012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5630d3b4-f241-5aa8-9956-015e1822542e'}})  2025-11-01 13:56:59.931025 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.931036 | orchestrator | 2025-11-01 13:56:59.931047 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-11-01 13:56:59.931058 | orchestrator | Saturday 01 November 2025 13:56:55 +0000 (0:00:00.182) 0:00:27.213 ***** 2025-11-01 13:56:59.931068 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:56:59.931079 | orchestrator | 2025-11-01 13:56:59.931090 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-11-01 13:56:59.931100 | orchestrator | Saturday 01 November 2025 13:56:55 +0000 (0:00:00.156) 0:00:27.370 ***** 2025-11-01 13:56:59.931111 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:56:59.931121 
| orchestrator | 2025-11-01 13:56:59.931132 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-11-01 13:56:59.931142 | orchestrator | Saturday 01 November 2025 13:56:55 +0000 (0:00:00.158) 0:00:27.528 ***** 2025-11-01 13:56:59.931153 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.931163 | orchestrator | 2025-11-01 13:56:59.931190 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-11-01 13:56:59.931202 | orchestrator | Saturday 01 November 2025 13:56:55 +0000 (0:00:00.357) 0:00:27.885 ***** 2025-11-01 13:56:59.931212 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.931223 | orchestrator | 2025-11-01 13:56:59.931234 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-11-01 13:56:59.931244 | orchestrator | Saturday 01 November 2025 13:56:56 +0000 (0:00:00.129) 0:00:28.015 ***** 2025-11-01 13:56:59.931255 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.931265 | orchestrator | 2025-11-01 13:56:59.931276 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-11-01 13:56:59.931287 | orchestrator | Saturday 01 November 2025 13:56:56 +0000 (0:00:00.174) 0:00:28.189 ***** 2025-11-01 13:56:59.931297 | orchestrator | ok: [testbed-node-4] => { 2025-11-01 13:56:59.931308 | orchestrator |  "ceph_osd_devices": { 2025-11-01 13:56:59.931319 | orchestrator |  "sdb": { 2025-11-01 13:56:59.931329 | orchestrator |  "osd_lvm_uuid": "bf0a4791-ac15-5066-8808-a0a6deeb0cc9" 2025-11-01 13:56:59.931340 | orchestrator |  }, 2025-11-01 13:56:59.931350 | orchestrator |  "sdc": { 2025-11-01 13:56:59.931369 | orchestrator |  "osd_lvm_uuid": "5630d3b4-f241-5aa8-9956-015e1822542e" 2025-11-01 13:56:59.931410 | orchestrator |  } 2025-11-01 13:56:59.931421 | orchestrator |  } 2025-11-01 13:56:59.931432 | orchestrator | } 2025-11-01 13:56:59.931443 | orchestrator | 2025-11-01 13:56:59.931454 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-11-01 13:56:59.931464 | orchestrator | Saturday 01 November 2025 13:56:56 +0000 (0:00:00.149) 0:00:28.339 ***** 2025-11-01 13:56:59.931475 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.931485 | orchestrator | 2025-11-01 13:56:59.931503 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-11-01 13:56:59.931514 | orchestrator | Saturday 01 November 2025 13:56:56 +0000 (0:00:00.147) 0:00:28.486 ***** 2025-11-01 13:56:59.931524 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.931535 | orchestrator | 2025-11-01 13:56:59.931545 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-11-01 13:56:59.931556 | orchestrator | Saturday 01 November 2025 13:56:56 +0000 (0:00:00.169) 0:00:28.655 ***** 2025-11-01 13:56:59.931567 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:56:59.931577 | orchestrator | 2025-11-01 13:56:59.931588 | orchestrator | TASK [Print configuration data] ************************************************ 2025-11-01 13:56:59.931598 | orchestrator | Saturday 01 November 2025 13:56:56 +0000 (0:00:00.139) 0:00:28.794 ***** 2025-11-01 13:56:59.931608 | orchestrator | changed: [testbed-node-4] => { 2025-11-01 13:56:59.931619 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-11-01 13:56:59.931630 | orchestrator |  "ceph_osd_devices": { 2025-11-01 
13:56:59.931640 | orchestrator |  "sdb": { 2025-11-01 13:56:59.931651 | orchestrator |  "osd_lvm_uuid": "bf0a4791-ac15-5066-8808-a0a6deeb0cc9" 2025-11-01 13:56:59.931666 | orchestrator |  }, 2025-11-01 13:56:59.931677 | orchestrator |  "sdc": { 2025-11-01 13:56:59.931688 | orchestrator |  "osd_lvm_uuid": "5630d3b4-f241-5aa8-9956-015e1822542e" 2025-11-01 13:56:59.931699 | orchestrator |  } 2025-11-01 13:56:59.931709 | orchestrator |  }, 2025-11-01 13:56:59.931720 | orchestrator |  "lvm_volumes": [ 2025-11-01 13:56:59.931730 | orchestrator |  { 2025-11-01 13:56:59.931741 | orchestrator |  "data": "osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9", 2025-11-01 13:56:59.931751 | orchestrator |  "data_vg": "ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9" 2025-11-01 13:56:59.931762 | orchestrator |  }, 2025-11-01 13:56:59.931772 | orchestrator |  { 2025-11-01 13:56:59.931783 | orchestrator |  "data": "osd-block-5630d3b4-f241-5aa8-9956-015e1822542e", 2025-11-01 13:56:59.931793 | orchestrator |  "data_vg": "ceph-5630d3b4-f241-5aa8-9956-015e1822542e" 2025-11-01 13:56:59.931804 | orchestrator |  } 2025-11-01 13:56:59.931814 | orchestrator |  ] 2025-11-01 13:56:59.931825 | orchestrator |  } 2025-11-01 13:56:59.931835 | orchestrator | } 2025-11-01 13:56:59.931846 | orchestrator | 2025-11-01 13:56:59.931857 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-11-01 13:56:59.931867 | orchestrator | Saturday 01 November 2025 13:56:57 +0000 (0:00:00.259) 0:00:29.054 ***** 2025-11-01 13:56:59.931878 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-11-01 13:56:59.931888 | orchestrator | 2025-11-01 13:56:59.931899 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-11-01 13:56:59.931910 | orchestrator | 2025-11-01 13:56:59.931920 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-01 13:56:59.931931 | orchestrator | Saturday 01 November 2025 13:56:58 +0000 (0:00:01.203) 0:00:30.258 ***** 2025-11-01 13:56:59.931942 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-11-01 13:56:59.931952 | orchestrator | 2025-11-01 13:56:59.931962 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-01 13:56:59.931973 | orchestrator | Saturday 01 November 2025 13:56:59 +0000 (0:00:00.827) 0:00:31.085 ***** 2025-11-01 13:56:59.931991 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:56:59.932001 | orchestrator | 2025-11-01 13:56:59.932012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:56:59.932022 | orchestrator | Saturday 01 November 2025 13:56:59 +0000 (0:00:00.283) 0:00:31.369 ***** 2025-11-01 13:56:59.932033 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-11-01 13:56:59.932044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-11-01 13:56:59.932054 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-11-01 13:56:59.932065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-11-01 13:56:59.932075 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-11-01 13:56:59.932086 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-11-01 13:56:59.932103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-11-01 13:57:09.289812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-11-01 13:57:09.289918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-11-01 13:57:09.289932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-11-01 13:57:09.289943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-11-01 13:57:09.289954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-11-01 13:57:09.289965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-11-01 13:57:09.289976 | orchestrator | 2025-11-01 13:57:09.289988 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:57:09.290000 | orchestrator | Saturday 01 November 2025 13:56:59 +0000 (0:00:00.469) 0:00:31.839 ***** 2025-11-01 13:57:09.290011 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.290081 | orchestrator | 2025-11-01 13:57:09.290093 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:57:09.290103 | orchestrator | Saturday 01 November 2025 13:57:00 +0000 (0:00:00.248) 0:00:32.087 ***** 2025-11-01 13:57:09.290114 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.290125 | orchestrator | 2025-11-01 13:57:09.290136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:57:09.290146 | orchestrator | Saturday 01 November 2025 13:57:00 +0000 (0:00:00.222) 0:00:32.310 ***** 2025-11-01 13:57:09.290157 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.290168 | orchestrator | 2025-11-01 13:57:09.290178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:57:09.290189 | orchestrator | Saturday 01 November 2025 13:57:00 +0000 (0:00:00.227) 0:00:32.538 ***** 2025-11-01 13:57:09.290200 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.290211 | orchestrator | 2025-11-01 13:57:09.290221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:57:09.290232 | orchestrator | Saturday 01 November 2025 13:57:00 +0000 (0:00:00.262) 0:00:32.800 ***** 2025-11-01 13:57:09.290243 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.290253 | orchestrator | 2025-11-01 13:57:09.290264 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:57:09.290275 | orchestrator | Saturday 01 November 2025 13:57:01 +0000 (0:00:00.218) 0:00:33.019 ***** 2025-11-01 13:57:09.290286 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.290296 | orchestrator | 2025-11-01 13:57:09.290307 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:57:09.290318 | orchestrator | Saturday 01 November 2025 13:57:01 +0000 (0:00:00.333) 0:00:33.353 ***** 2025-11-01 13:57:09.290329 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.290362 | orchestrator | 2025-11-01 13:57:09.290422 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-11-01 13:57:09.290436 | orchestrator | Saturday 01 November 2025 13:57:01 +0000 (0:00:00.231) 0:00:33.584 ***** 2025-11-01 13:57:09.290449 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.290461 | orchestrator | 2025-11-01 13:57:09.290489 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:57:09.290503 | orchestrator | Saturday 01 November 2025 13:57:01 +0000 (0:00:00.219) 0:00:33.803 ***** 2025-11-01 13:57:09.290516 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3) 2025-11-01 13:57:09.290529 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3) 2025-11-01 13:57:09.290541 | orchestrator | 2025-11-01 13:57:09.290554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:57:09.290566 | orchestrator | Saturday 01 November 2025 13:57:02 +0000 (0:00:00.955) 0:00:34.758 ***** 2025-11-01 13:57:09.290577 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dbba508b-4e10-452f-8431-011284f42e7d) 2025-11-01 13:57:09.290589 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dbba508b-4e10-452f-8431-011284f42e7d) 2025-11-01 13:57:09.290602 | orchestrator | 2025-11-01 13:57:09.290614 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:57:09.290626 | orchestrator | Saturday 01 November 2025 13:57:03 +0000 (0:00:00.470) 0:00:35.229 ***** 2025-11-01 13:57:09.290638 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f57a5620-543a-43ae-a22d-8a42cad6fb24) 2025-11-01 13:57:09.290650 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f57a5620-543a-43ae-a22d-8a42cad6fb24) 2025-11-01 13:57:09.290662 | orchestrator | 2025-11-01 13:57:09.290674 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:57:09.290686 | orchestrator | Saturday 01 November 2025 13:57:03 +0000 (0:00:00.497) 0:00:35.726 ***** 2025-11-01 13:57:09.290698 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c347dc72-435c-43d5-a9cf-2c60f1de142e) 2025-11-01 13:57:09.290711 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c347dc72-435c-43d5-a9cf-2c60f1de142e) 2025-11-01 13:57:09.290721 | orchestrator | 2025-11-01 13:57:09.290732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:57:09.290742 | orchestrator | Saturday 01 November 2025 13:57:04 +0000 (0:00:00.496) 0:00:36.223 ***** 2025-11-01 13:57:09.290753 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-01 13:57:09.290763 | orchestrator | 2025-11-01 13:57:09.290774 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:57:09.290784 | orchestrator | Saturday 01 November 2025 13:57:04 +0000 (0:00:00.441) 0:00:36.665 ***** 2025-11-01 13:57:09.290812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-11-01 13:57:09.290824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-11-01 13:57:09.290834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-11-01 13:57:09.290844 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-11-01 13:57:09.290855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-11-01 13:57:09.290865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-11-01 13:57:09.290876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-11-01 13:57:09.290886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-11-01 13:57:09.290897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-11-01 13:57:09.290916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-11-01 13:57:09.290927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-11-01 13:57:09.290938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-11-01 13:57:09.290948 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-11-01 13:57:09.290959 | orchestrator | 2025-11-01 13:57:09.290969 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:57:09.290979 | orchestrator | Saturday 01 November 2025 13:57:05 +0000 (0:00:00.470) 0:00:37.136 ***** 2025-11-01 13:57:09.290990 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.291001 | orchestrator | 2025-11-01 13:57:09.291011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:57:09.291022 | orchestrator | Saturday 01 November 2025 13:57:05 +0000 (0:00:00.314) 0:00:37.450 ***** 2025-11-01 13:57:09.291032 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.291043 | orchestrator | 2025-11-01 13:57:09.291053 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:57:09.291064 | orchestrator | Saturday 01 November 2025 13:57:05 +0000 (0:00:00.250) 0:00:37.701 ***** 2025-11-01 13:57:09.291074 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.291085 | orchestrator | 2025-11-01 13:57:09.291095 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:57:09.291106 | orchestrator | Saturday 01 November 2025 13:57:05 +0000 (0:00:00.205) 0:00:37.907 ***** 2025-11-01 13:57:09.291116 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.291127 | orchestrator | 2025-11-01 13:57:09.291137 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:57:09.291148 | orchestrator | Saturday 01 November 2025 13:57:06 +0000 (0:00:00.193) 0:00:38.100 ***** 2025-11-01 13:57:09.291158 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.291169 | orchestrator | 2025-11-01 13:57:09.291179 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:57:09.291190 | orchestrator | Saturday 01 November 2025 13:57:06 +0000 (0:00:00.249) 0:00:38.349 ***** 2025-11-01 13:57:09.291200 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.291211 | orchestrator | 2025-11-01 13:57:09.291221 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-11-01 13:57:09.291232 | orchestrator | Saturday 01 November 2025 13:57:07 +0000 (0:00:00.720) 0:00:39.070 ***** 2025-11-01 13:57:09.291243 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.291253 | orchestrator | 2025-11-01 13:57:09.291263 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:57:09.291274 | orchestrator | Saturday 01 November 2025 13:57:07 +0000 (0:00:00.204) 0:00:39.275 ***** 2025-11-01 13:57:09.291284 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.291295 | orchestrator | 2025-11-01 13:57:09.291305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:57:09.291316 | orchestrator | Saturday 01 November 2025 13:57:07 +0000 (0:00:00.219) 0:00:39.494 ***** 2025-11-01 13:57:09.291327 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-11-01 13:57:09.291337 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-11-01 13:57:09.291348 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-11-01 13:57:09.291358 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-11-01 13:57:09.291369 | orchestrator | 2025-11-01 13:57:09.291417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:57:09.291428 | orchestrator | Saturday 01 November 2025 13:57:08 +0000 (0:00:00.778) 0:00:40.273 ***** 2025-11-01 13:57:09.291439 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.291449 | orchestrator | 2025-11-01 13:57:09.291460 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:57:09.291482 | orchestrator | Saturday 01 November 2025 13:57:08 +0000 (0:00:00.206) 0:00:40.479 ***** 2025-11-01 13:57:09.291493 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.291504 | orchestrator | 2025-11-01 13:57:09.291514 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:57:09.291525 | orchestrator | Saturday 01 November 2025 13:57:08 +0000 (0:00:00.204) 0:00:40.683 ***** 2025-11-01 13:57:09.291536 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.291547 | orchestrator | 2025-11-01 13:57:09.291558 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:57:09.291568 | orchestrator | Saturday 01 November 2025 13:57:09 +0000 (0:00:00.237) 0:00:40.921 ***** 2025-11-01 13:57:09.291585 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:09.291596 | orchestrator | 2025-11-01 13:57:09.291607 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-11-01 13:57:09.291624 | orchestrator | Saturday 01 November 2025 13:57:09 +0000 (0:00:00.279) 0:00:41.200 ***** 2025-11-01 13:57:14.092155 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-11-01 13:57:14.092255 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-11-01 13:57:14.092269 | orchestrator | 2025-11-01 13:57:14.092282 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-11-01 13:57:14.092293 | orchestrator | Saturday 01 November 2025 13:57:09 +0000 (0:00:00.206) 0:00:41.407 ***** 2025-11-01 13:57:14.092304 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:14.092315 | orchestrator | 2025-11-01 13:57:14.092326 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-11-01 13:57:14.092336 | orchestrator | Saturday 01 November 2025 13:57:09 +0000 (0:00:00.134) 0:00:41.541 ***** 2025-11-01 13:57:14.092347 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:14.092357 | orchestrator | 2025-11-01 13:57:14.092368 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-11-01 13:57:14.092446 | orchestrator | Saturday 01 November 2025 13:57:09 +0000 (0:00:00.165) 0:00:41.707 ***** 2025-11-01 13:57:14.092458 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:14.092469 | orchestrator | 2025-11-01 13:57:14.092479 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-11-01 13:57:14.092490 | orchestrator | Saturday 01 November 2025 13:57:10 +0000 (0:00:00.397) 0:00:42.105 ***** 2025-11-01 13:57:14.092500 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:57:14.092513 | orchestrator | 2025-11-01 13:57:14.092524 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-11-01 13:57:14.092534 | orchestrator | Saturday 01 November 2025 13:57:10 +0000 (0:00:00.158) 0:00:42.263 ***** 2025-11-01 13:57:14.092546 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'}}) 2025-11-01 13:57:14.092558 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7e540012-4fa7-591e-a498-149cbb5b09d9'}}) 2025-11-01 13:57:14.092569 | orchestrator | 2025-11-01 13:57:14.092579 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-11-01 13:57:14.092590 | orchestrator | Saturday 01 November 2025 13:57:10 +0000 (0:00:00.209) 0:00:42.473 ***** 2025-11-01 13:57:14.092601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'}})  2025-11-01 13:57:14.092614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7e540012-4fa7-591e-a498-149cbb5b09d9'}})  2025-11-01 13:57:14.092624 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:14.092635 | orchestrator | 2025-11-01 13:57:14.092662 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-11-01 13:57:14.092674 | orchestrator | Saturday 01 November 2025 13:57:10 +0000 (0:00:00.172) 0:00:42.645 ***** 2025-11-01 13:57:14.092684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'}})  2025-11-01 13:57:14.092716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7e540012-4fa7-591e-a498-149cbb5b09d9'}})  2025-11-01 13:57:14.092728 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:14.092741 | orchestrator | 2025-11-01 13:57:14.092753 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-11-01 13:57:14.092765 | orchestrator | Saturday 01 November 2025 13:57:10 +0000 (0:00:00.171) 0:00:42.817 ***** 2025-11-01 13:57:14.092777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'}})  2025-11-01 13:57:14.092790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7e540012-4fa7-591e-a498-149cbb5b09d9'}})  2025-11-01 
13:57:14.092803 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:14.092815 | orchestrator | 2025-11-01 13:57:14.092827 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-11-01 13:57:14.092840 | orchestrator | Saturday 01 November 2025 13:57:11 +0000 (0:00:00.175) 0:00:42.992 ***** 2025-11-01 13:57:14.092852 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:57:14.092864 | orchestrator | 2025-11-01 13:57:14.092876 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-11-01 13:57:14.092888 | orchestrator | Saturday 01 November 2025 13:57:11 +0000 (0:00:00.178) 0:00:43.171 ***** 2025-11-01 13:57:14.092900 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:57:14.092912 | orchestrator | 2025-11-01 13:57:14.092924 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-11-01 13:57:14.092937 | orchestrator | Saturday 01 November 2025 13:57:11 +0000 (0:00:00.244) 0:00:43.418 ***** 2025-11-01 13:57:14.092950 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:14.092961 | orchestrator | 2025-11-01 13:57:14.092973 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-11-01 13:57:14.092985 | orchestrator | Saturday 01 November 2025 13:57:11 +0000 (0:00:00.255) 0:00:43.673 ***** 2025-11-01 13:57:14.092997 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:14.093009 | orchestrator | 2025-11-01 13:57:14.093022 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-11-01 13:57:14.093034 | orchestrator | Saturday 01 November 2025 13:57:11 +0000 (0:00:00.134) 0:00:43.808 ***** 2025-11-01 13:57:14.093047 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:14.093058 | orchestrator | 2025-11-01 13:57:14.093069 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-11-01 13:57:14.093079 | orchestrator | Saturday 01 November 2025 13:57:12 +0000 (0:00:00.129) 0:00:43.938 ***** 2025-11-01 13:57:14.093089 | orchestrator | ok: [testbed-node-5] => { 2025-11-01 13:57:14.093100 | orchestrator |  "ceph_osd_devices": { 2025-11-01 13:57:14.093111 | orchestrator |  "sdb": { 2025-11-01 13:57:14.093121 | orchestrator |  "osd_lvm_uuid": "8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f" 2025-11-01 13:57:14.093148 | orchestrator |  }, 2025-11-01 13:57:14.093159 | orchestrator |  "sdc": { 2025-11-01 13:57:14.093170 | orchestrator |  "osd_lvm_uuid": "7e540012-4fa7-591e-a498-149cbb5b09d9" 2025-11-01 13:57:14.093180 | orchestrator |  } 2025-11-01 13:57:14.093191 | orchestrator |  } 2025-11-01 13:57:14.093202 | orchestrator | } 2025-11-01 13:57:14.093213 | orchestrator | 2025-11-01 13:57:14.093224 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-11-01 13:57:14.093234 | orchestrator | Saturday 01 November 2025 13:57:12 +0000 (0:00:00.163) 0:00:44.101 ***** 2025-11-01 13:57:14.093245 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:14.093256 | orchestrator | 2025-11-01 13:57:14.093266 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-11-01 13:57:14.093277 | orchestrator | Saturday 01 November 2025 13:57:12 +0000 (0:00:00.138) 0:00:44.240 ***** 2025-11-01 13:57:14.093287 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:14.093298 | orchestrator | 2025-11-01 13:57:14.093308 | orchestrator | 
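Taken together, the per-node result is just the device map plus the compiled lvm_volumes list, exactly the structure shown in the "Print configuration data" output for testbed-node-3 and testbed-node-4 above and for testbed-node-5 right below. A sketch of the "Set OSD devices config data" step, using the variable names from that debug output:

- name: Set OSD devices config data
  ansible.builtin.set_fact:
    _ceph_configure_lvm_config_data:
      ceph_osd_devices: "{{ ceph_osd_devices }}"
      lvm_volumes: "{{ lvm_volumes }}"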
TASK [Print shared DB/WAL devices] ********************************************* 2025-11-01 13:57:14.093328 | orchestrator | Saturday 01 November 2025 13:57:12 +0000 (0:00:00.453) 0:00:44.693 ***** 2025-11-01 13:57:14.093339 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:57:14.093349 | orchestrator | 2025-11-01 13:57:14.093360 | orchestrator | TASK [Print configuration data] ************************************************ 2025-11-01 13:57:14.093394 | orchestrator | Saturday 01 November 2025 13:57:12 +0000 (0:00:00.144) 0:00:44.838 ***** 2025-11-01 13:57:14.093407 | orchestrator | changed: [testbed-node-5] => { 2025-11-01 13:57:14.093418 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-11-01 13:57:14.093429 | orchestrator |  "ceph_osd_devices": { 2025-11-01 13:57:14.093439 | orchestrator |  "sdb": { 2025-11-01 13:57:14.093450 | orchestrator |  "osd_lvm_uuid": "8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f" 2025-11-01 13:57:14.093461 | orchestrator |  }, 2025-11-01 13:57:14.093471 | orchestrator |  "sdc": { 2025-11-01 13:57:14.093482 | orchestrator |  "osd_lvm_uuid": "7e540012-4fa7-591e-a498-149cbb5b09d9" 2025-11-01 13:57:14.093492 | orchestrator |  } 2025-11-01 13:57:14.093503 | orchestrator |  }, 2025-11-01 13:57:14.093513 | orchestrator |  "lvm_volumes": [ 2025-11-01 13:57:14.093524 | orchestrator |  { 2025-11-01 13:57:14.093534 | orchestrator |  "data": "osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f", 2025-11-01 13:57:14.093545 | orchestrator |  "data_vg": "ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f" 2025-11-01 13:57:14.093555 | orchestrator |  }, 2025-11-01 13:57:14.093565 | orchestrator |  { 2025-11-01 13:57:14.093576 | orchestrator |  "data": "osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9", 2025-11-01 13:57:14.093587 | orchestrator |  "data_vg": "ceph-7e540012-4fa7-591e-a498-149cbb5b09d9" 2025-11-01 13:57:14.093597 | orchestrator |  } 2025-11-01 13:57:14.093608 | orchestrator |  ] 2025-11-01 13:57:14.093618 | orchestrator |  } 2025-11-01 13:57:14.093633 | orchestrator | } 2025-11-01 13:57:14.093644 | orchestrator | 2025-11-01 13:57:14.093654 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-11-01 13:57:14.093665 | orchestrator | Saturday 01 November 2025 13:57:13 +0000 (0:00:00.217) 0:00:45.055 ***** 2025-11-01 13:57:14.093675 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-11-01 13:57:14.093686 | orchestrator | 2025-11-01 13:57:14.093696 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:57:14.093715 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-01 13:57:14.093728 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-01 13:57:14.093738 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-01 13:57:14.093749 | orchestrator | 2025-11-01 13:57:14.093760 | orchestrator | 2025-11-01 13:57:14.093770 | orchestrator | 2025-11-01 13:57:14.093781 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:57:14.093791 | orchestrator | Saturday 01 November 2025 13:57:14 +0000 (0:00:00.934) 0:00:45.989 ***** 2025-11-01 13:57:14.093801 | orchestrator | =============================================================================== 2025-11-01 13:57:14.093812 | orchestrator | Write configuration file 
------------------------------------------------ 4.09s 2025-11-01 13:57:14.093822 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.36s 2025-11-01 13:57:14.093833 | orchestrator | Add known links to the list of available block devices ------------------ 1.32s 2025-11-01 13:57:14.093843 | orchestrator | Add known partitions to the list of available block devices ------------- 1.25s 2025-11-01 13:57:14.093854 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s 2025-11-01 13:57:14.093873 | orchestrator | Add known links to the list of available block devices ------------------ 0.96s 2025-11-01 13:57:14.093884 | orchestrator | Print configuration data ------------------------------------------------ 0.91s 2025-11-01 13:57:14.093895 | orchestrator | Add known partitions to the list of available block devices ------------- 0.90s 2025-11-01 13:57:14.093905 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s 2025-11-01 13:57:14.093916 | orchestrator | Print DB devices -------------------------------------------------------- 0.79s 2025-11-01 13:57:14.093926 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2025-11-01 13:57:14.093936 | orchestrator | Get initial list of available block devices ----------------------------- 0.78s 2025-11-01 13:57:14.093947 | orchestrator | Set DB devices config data ---------------------------------------------- 0.77s 2025-11-01 13:57:14.093958 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.77s 2025-11-01 13:57:14.093974 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2025-11-01 13:57:14.496075 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2025-11-01 13:57:14.496142 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2025-11-01 13:57:14.496154 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.68s 2025-11-01 13:57:14.496165 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2025-11-01 13:57:14.496176 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2025-11-01 13:57:37.415009 | orchestrator | 2025-11-01 13:57:37 | INFO  | Task 30b655ef-5b72-4d6a-81ae-5d2bcbff2518 (sync inventory) is running in background. Output coming soon. 
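
Note on the configuration data printed above for testbed-node-5: each entry in ceph_osd_devices (sdb, sdc) carries an osd_lvm_uuid, and the compiled lvm_volumes list simply expands that UUID into a logical volume name osd-block-<uuid> inside a volume group ceph-<uuid>. The following Python sketch reproduces only that mapping for illustration; it is not the playbook's implementation, and the input dict is copied verbatim from the log output.

# Illustrative sketch: derive lvm_volumes from ceph_osd_devices
# (values copied from the log output for testbed-node-5).
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f"},
    "sdc": {"osd_lvm_uuid": "7e540012-4fa7-591e-a498-149cbb5b09d9"},
}

def compile_lvm_volumes(osd_devices):
    """Build the lvm_volumes list shown under 'Print configuration data'."""
    volumes = []
    for device, params in sorted(osd_devices.items()):
        uuid = params["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

if __name__ == "__main__":
    for entry in compile_lvm_volumes(ceph_osd_devices):
        print(entry)
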
2025-11-01 13:58:07.688824 | orchestrator | 2025-11-01 13:57:38 | INFO  | Starting group_vars file reorganization 2025-11-01 13:58:07.688938 | orchestrator | 2025-11-01 13:57:38 | INFO  | Moved 0 file(s) to their respective directories 2025-11-01 13:58:07.688953 | orchestrator | 2025-11-01 13:57:38 | INFO  | Group_vars file reorganization completed 2025-11-01 13:58:07.688965 | orchestrator | 2025-11-01 13:57:41 | INFO  | Starting variable preparation from inventory 2025-11-01 13:58:07.688977 | orchestrator | 2025-11-01 13:57:45 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-11-01 13:58:07.688988 | orchestrator | 2025-11-01 13:57:45 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-11-01 13:58:07.688999 | orchestrator | 2025-11-01 13:57:45 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-11-01 13:58:07.689010 | orchestrator | 2025-11-01 13:57:45 | INFO  | 3 file(s) written, 6 host(s) processed 2025-11-01 13:58:07.689021 | orchestrator | 2025-11-01 13:57:45 | INFO  | Variable preparation completed 2025-11-01 13:58:07.689032 | orchestrator | 2025-11-01 13:57:47 | INFO  | Starting inventory overwrite handling 2025-11-01 13:58:07.689043 | orchestrator | 2025-11-01 13:57:47 | INFO  | Handling group overwrites in 99-overwrite 2025-11-01 13:58:07.689054 | orchestrator | 2025-11-01 13:57:47 | INFO  | Removing group frr:children from 60-generic 2025-11-01 13:58:07.689065 | orchestrator | 2025-11-01 13:57:47 | INFO  | Removing group storage:children from 50-kolla 2025-11-01 13:58:07.689076 | orchestrator | 2025-11-01 13:57:47 | INFO  | Removing group netbird:children from 50-infrastructure 2025-11-01 13:58:07.689087 | orchestrator | 2025-11-01 13:57:47 | INFO  | Removing group ceph-rgw from 50-ceph 2025-11-01 13:58:07.689098 | orchestrator | 2025-11-01 13:57:47 | INFO  | Removing group ceph-mds from 50-ceph 2025-11-01 13:58:07.689109 | orchestrator | 2025-11-01 13:57:47 | INFO  | Handling group overwrites in 20-roles 2025-11-01 13:58:07.689120 | orchestrator | 2025-11-01 13:57:47 | INFO  | Removing group k3s_node from 50-infrastructure 2025-11-01 13:58:07.689155 | orchestrator | 2025-11-01 13:57:47 | INFO  | Removed 6 group(s) in total 2025-11-01 13:58:07.689166 | orchestrator | 2025-11-01 13:57:47 | INFO  | Inventory overwrite handling completed 2025-11-01 13:58:07.689177 | orchestrator | 2025-11-01 13:57:48 | INFO  | Starting merge of inventory files 2025-11-01 13:58:07.689188 | orchestrator | 2025-11-01 13:57:48 | INFO  | Inventory files merged successfully 2025-11-01 13:58:07.689198 | orchestrator | 2025-11-01 13:57:53 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-11-01 13:58:07.689209 | orchestrator | 2025-11-01 13:58:06 | INFO  | Successfully wrote ClusterShell configuration 2025-11-01 13:58:07.689221 | orchestrator | [master 55bcbd1] 2025-11-01-13-58 2025-11-01 13:58:07.689233 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-11-01 13:58:10.099467 | orchestrator | 2025-11-01 13:58:10 | INFO  | Task 1f1e5c9f-c38e-45a6-ac45-9723eeb81691 (ceph-create-lvm-devices) was prepared for execution. 2025-11-01 13:58:10.099564 | orchestrator | 2025-11-01 13:58:10 | INFO  | It takes a moment until task 1f1e5c9f-c38e-45a6-ac45-9723eeb81691 (ceph-create-lvm-devices) has been started and output is visible here. 
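
The sync-inventory task above reorganizes group_vars and writes generated files such as 050-kolla-ceph-rgw-hosts.yml containing a ceph_rgw_hosts variable derived from the inventory. As a rough illustration only (the file and variable names come from the log; the host list and target path are assumed, and this is not the tool's actual code), such a write step could look like this:

# Rough sketch of writing a generated group_vars file with PyYAML.
# File/variable names from the log; hosts and path are hypothetical.
import pathlib
import yaml  # PyYAML

def write_group_vars(path, variable, hosts):
    """Dump a single host-list variable into a group_vars YAML file."""
    data = {variable: sorted(hosts)}
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(yaml.safe_dump(data, default_flow_style=False))

if __name__ == "__main__":
    target = pathlib.Path("group_vars/all/050-kolla-ceph-rgw-hosts.yml")  # assumed location
    write_group_vars(target, "ceph_rgw_hosts",
                     ["testbed-node-0", "testbed-node-1", "testbed-node-2"])  # hypothetical hosts
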
2025-11-01 13:58:23.952518 | orchestrator | 2025-11-01 13:58:23.952632 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-11-01 13:58:23.952648 | orchestrator | 2025-11-01 13:58:23.952660 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-01 13:58:23.952671 | orchestrator | Saturday 01 November 2025 13:58:15 +0000 (0:00:00.357) 0:00:00.357 ***** 2025-11-01 13:58:23.952683 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-01 13:58:23.952694 | orchestrator | 2025-11-01 13:58:23.952704 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-01 13:58:23.952715 | orchestrator | Saturday 01 November 2025 13:58:15 +0000 (0:00:00.330) 0:00:00.688 ***** 2025-11-01 13:58:23.952726 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:58:23.952737 | orchestrator | 2025-11-01 13:58:23.952748 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:23.952759 | orchestrator | Saturday 01 November 2025 13:58:16 +0000 (0:00:00.267) 0:00:00.955 ***** 2025-11-01 13:58:23.952769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-11-01 13:58:23.952782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-11-01 13:58:23.952792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-11-01 13:58:23.952803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-11-01 13:58:23.952813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-11-01 13:58:23.952824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-11-01 13:58:23.952835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-11-01 13:58:23.952845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-11-01 13:58:23.952856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-11-01 13:58:23.952867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-11-01 13:58:23.952877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-11-01 13:58:23.952888 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-11-01 13:58:23.952898 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-11-01 13:58:23.952909 | orchestrator | 2025-11-01 13:58:23.952919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:23.952950 | orchestrator | Saturday 01 November 2025 13:58:16 +0000 (0:00:00.629) 0:00:01.585 ***** 2025-11-01 13:58:23.952962 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:23.952972 | orchestrator | 2025-11-01 13:58:23.952983 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:23.953011 | orchestrator | Saturday 01 November 2025 13:58:16 +0000 (0:00:00.258) 0:00:01.843 ***** 2025-11-01 13:58:23.953024 | orchestrator | skipping: [testbed-node-3] 2025-11-01 
13:58:23.953036 | orchestrator | 2025-11-01 13:58:23.953048 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:23.953060 | orchestrator | Saturday 01 November 2025 13:58:17 +0000 (0:00:00.220) 0:00:02.063 ***** 2025-11-01 13:58:23.953077 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:23.953089 | orchestrator | 2025-11-01 13:58:23.953101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:23.953113 | orchestrator | Saturday 01 November 2025 13:58:17 +0000 (0:00:00.214) 0:00:02.278 ***** 2025-11-01 13:58:23.953124 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:23.953136 | orchestrator | 2025-11-01 13:58:23.953148 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:23.953160 | orchestrator | Saturday 01 November 2025 13:58:17 +0000 (0:00:00.209) 0:00:02.488 ***** 2025-11-01 13:58:23.953172 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:23.953183 | orchestrator | 2025-11-01 13:58:23.953195 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:23.953206 | orchestrator | Saturday 01 November 2025 13:58:17 +0000 (0:00:00.218) 0:00:02.706 ***** 2025-11-01 13:58:23.953218 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:23.953229 | orchestrator | 2025-11-01 13:58:23.953241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:23.953252 | orchestrator | Saturday 01 November 2025 13:58:18 +0000 (0:00:00.246) 0:00:02.953 ***** 2025-11-01 13:58:23.953264 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:23.953276 | orchestrator | 2025-11-01 13:58:23.953288 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:23.953300 | orchestrator | Saturday 01 November 2025 13:58:18 +0000 (0:00:00.222) 0:00:03.176 ***** 2025-11-01 13:58:23.953311 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:23.953323 | orchestrator | 2025-11-01 13:58:23.953335 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:23.953346 | orchestrator | Saturday 01 November 2025 13:58:18 +0000 (0:00:00.218) 0:00:03.394 ***** 2025-11-01 13:58:23.953359 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede) 2025-11-01 13:58:23.953372 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede) 2025-11-01 13:58:23.953405 | orchestrator | 2025-11-01 13:58:23.953416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:23.953427 | orchestrator | Saturday 01 November 2025 13:58:18 +0000 (0:00:00.447) 0:00:03.842 ***** 2025-11-01 13:58:23.953453 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4fee078c-1565-4ab1-bdda-b8bebdd42045) 2025-11-01 13:58:23.953465 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4fee078c-1565-4ab1-bdda-b8bebdd42045) 2025-11-01 13:58:23.953475 | orchestrator | 2025-11-01 13:58:23.953486 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:23.953497 | orchestrator | Saturday 01 November 2025 13:58:19 +0000 (0:00:00.725) 0:00:04.567 ***** 2025-11-01 
13:58:23.953507 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c17a8236-4766-4598-abab-5d58d5ce65a6) 2025-11-01 13:58:23.953518 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c17a8236-4766-4598-abab-5d58d5ce65a6) 2025-11-01 13:58:23.953528 | orchestrator | 2025-11-01 13:58:23.953539 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:23.953558 | orchestrator | Saturday 01 November 2025 13:58:20 +0000 (0:00:00.743) 0:00:05.310 ***** 2025-11-01 13:58:23.953568 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7d89e604-ccfa-4ce6-abe5-76180138882d) 2025-11-01 13:58:23.953579 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7d89e604-ccfa-4ce6-abe5-76180138882d) 2025-11-01 13:58:23.953590 | orchestrator | 2025-11-01 13:58:23.953600 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:23.953611 | orchestrator | Saturday 01 November 2025 13:58:21 +0000 (0:00:00.956) 0:00:06.267 ***** 2025-11-01 13:58:23.953622 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-01 13:58:23.953632 | orchestrator | 2025-11-01 13:58:23.953643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:23.953653 | orchestrator | Saturday 01 November 2025 13:58:21 +0000 (0:00:00.343) 0:00:06.611 ***** 2025-11-01 13:58:23.953664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-11-01 13:58:23.953674 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-11-01 13:58:23.953685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-11-01 13:58:23.953695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-11-01 13:58:23.953706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-11-01 13:58:23.953716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-11-01 13:58:23.953727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-11-01 13:58:23.953737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-11-01 13:58:23.953748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-11-01 13:58:23.953758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-11-01 13:58:23.953769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-11-01 13:58:23.953779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-11-01 13:58:23.953789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-11-01 13:58:23.953800 | orchestrator | 2025-11-01 13:58:23.953810 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:23.953821 | orchestrator | Saturday 01 November 2025 13:58:22 +0000 (0:00:00.457) 0:00:07.068 ***** 2025-11-01 13:58:23.953832 | orchestrator | skipping: [testbed-node-3] 
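
The repeated "Add known links ..." and "Add known partitions ..." tasks above build up the list of available block devices by resolving /dev/disk/by-id entries (e.g. scsi-0QEMU_QEMU_HARDDISK_...) back to kernel device names and by collecting their partitions (sda1, sda14, ...). A minimal stand-alone sketch of the link-resolution idea, not the playbook's actual task logic:

# Minimal sketch: map /dev/disk/by-id symlinks to kernel device names,
# mirroring the idea behind the "Add known links ..." tasks.
import os
from collections import defaultdict

def links_by_device(by_id_dir="/dev/disk/by-id"):
    """Return {kernel device name: [by-id link names]} for local block devices."""
    result = defaultdict(list)
    if not os.path.isdir(by_id_dir):
        return result
    for name in sorted(os.listdir(by_id_dir)):
        target = os.path.realpath(os.path.join(by_id_dir, name))
        device = os.path.basename(target)   # e.g. 'sdb' or 'sda1'
        result[device].append(name)         # e.g. 'scsi-0QEMU_QEMU_HARDDISK_...'
    return result

if __name__ == "__main__":
    for device, links in links_by_device().items():
        print(device, "->", links)
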
2025-11-01 13:58:23.953842 | orchestrator | 2025-11-01 13:58:23.953853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:23.953863 | orchestrator | Saturday 01 November 2025 13:58:22 +0000 (0:00:00.206) 0:00:07.275 ***** 2025-11-01 13:58:23.953874 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:23.953884 | orchestrator | 2025-11-01 13:58:23.953895 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:23.953906 | orchestrator | Saturday 01 November 2025 13:58:22 +0000 (0:00:00.228) 0:00:07.503 ***** 2025-11-01 13:58:23.953916 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:23.953927 | orchestrator | 2025-11-01 13:58:23.953937 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:23.953948 | orchestrator | Saturday 01 November 2025 13:58:22 +0000 (0:00:00.201) 0:00:07.705 ***** 2025-11-01 13:58:23.953958 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:23.953969 | orchestrator | 2025-11-01 13:58:23.953979 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:23.953998 | orchestrator | Saturday 01 November 2025 13:58:23 +0000 (0:00:00.264) 0:00:07.969 ***** 2025-11-01 13:58:23.954009 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:23.954075 | orchestrator | 2025-11-01 13:58:23.954089 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:23.954100 | orchestrator | Saturday 01 November 2025 13:58:23 +0000 (0:00:00.212) 0:00:08.182 ***** 2025-11-01 13:58:23.954110 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:23.954121 | orchestrator | 2025-11-01 13:58:23.954132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:23.954142 | orchestrator | Saturday 01 November 2025 13:58:23 +0000 (0:00:00.209) 0:00:08.391 ***** 2025-11-01 13:58:23.954153 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:23.954164 | orchestrator | 2025-11-01 13:58:23.954174 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:23.954185 | orchestrator | Saturday 01 November 2025 13:58:23 +0000 (0:00:00.201) 0:00:08.593 ***** 2025-11-01 13:58:23.954203 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.989270 | orchestrator | 2025-11-01 13:58:32.989444 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:32.989472 | orchestrator | Saturday 01 November 2025 13:58:23 +0000 (0:00:00.195) 0:00:08.788 ***** 2025-11-01 13:58:32.989485 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-11-01 13:58:32.989498 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-11-01 13:58:32.989509 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-11-01 13:58:32.989520 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-11-01 13:58:32.989531 | orchestrator | 2025-11-01 13:58:32.989542 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:32.989553 | orchestrator | Saturday 01 November 2025 13:58:25 +0000 (0:00:01.168) 0:00:09.957 ***** 2025-11-01 13:58:32.989564 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.989575 | orchestrator | 2025-11-01 13:58:32.989585 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:32.989596 | orchestrator | Saturday 01 November 2025 13:58:25 +0000 (0:00:00.252) 0:00:10.209 ***** 2025-11-01 13:58:32.989607 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.989617 | orchestrator | 2025-11-01 13:58:32.989628 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:32.989639 | orchestrator | Saturday 01 November 2025 13:58:25 +0000 (0:00:00.198) 0:00:10.407 ***** 2025-11-01 13:58:32.989649 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.989660 | orchestrator | 2025-11-01 13:58:32.989671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:32.989682 | orchestrator | Saturday 01 November 2025 13:58:25 +0000 (0:00:00.246) 0:00:10.653 ***** 2025-11-01 13:58:32.989692 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.989703 | orchestrator | 2025-11-01 13:58:32.989714 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-11-01 13:58:32.989724 | orchestrator | Saturday 01 November 2025 13:58:26 +0000 (0:00:00.228) 0:00:10.882 ***** 2025-11-01 13:58:32.989735 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.989746 | orchestrator | 2025-11-01 13:58:32.989756 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-11-01 13:58:32.989767 | orchestrator | Saturday 01 November 2025 13:58:26 +0000 (0:00:00.152) 0:00:11.035 ***** 2025-11-01 13:58:32.989778 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47edfe94-e799-500a-9f78-eae255c41273'}}) 2025-11-01 13:58:32.989789 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'efff7302-70e8-5bbc-90af-2166d1a25777'}}) 2025-11-01 13:58:32.989799 | orchestrator | 2025-11-01 13:58:32.989811 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-11-01 13:58:32.989823 | orchestrator | Saturday 01 November 2025 13:58:26 +0000 (0:00:00.247) 0:00:11.282 ***** 2025-11-01 13:58:32.989862 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'}) 2025-11-01 13:58:32.989874 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'}) 2025-11-01 13:58:32.989886 | orchestrator | 2025-11-01 13:58:32.989915 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-11-01 13:58:32.989933 | orchestrator | Saturday 01 November 2025 13:58:28 +0000 (0:00:02.239) 0:00:13.522 ***** 2025-11-01 13:58:32.989946 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:32.989959 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:32.989971 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.989983 | orchestrator | 2025-11-01 13:58:32.989995 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-11-01 
13:58:32.990007 | orchestrator | Saturday 01 November 2025 13:58:28 +0000 (0:00:00.200) 0:00:13.722 ***** 2025-11-01 13:58:32.990076 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'}) 2025-11-01 13:58:32.990092 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'}) 2025-11-01 13:58:32.990104 | orchestrator | 2025-11-01 13:58:32.990116 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-11-01 13:58:32.990128 | orchestrator | Saturday 01 November 2025 13:58:30 +0000 (0:00:01.631) 0:00:15.354 ***** 2025-11-01 13:58:32.990141 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:32.990154 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:32.990166 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.990176 | orchestrator | 2025-11-01 13:58:32.990187 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-11-01 13:58:32.990198 | orchestrator | Saturday 01 November 2025 13:58:30 +0000 (0:00:00.180) 0:00:15.534 ***** 2025-11-01 13:58:32.990209 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.990219 | orchestrator | 2025-11-01 13:58:32.990230 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-11-01 13:58:32.990260 | orchestrator | Saturday 01 November 2025 13:58:30 +0000 (0:00:00.208) 0:00:15.742 ***** 2025-11-01 13:58:32.990272 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:32.990283 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:32.990293 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.990303 | orchestrator | 2025-11-01 13:58:32.990314 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-11-01 13:58:32.990324 | orchestrator | Saturday 01 November 2025 13:58:31 +0000 (0:00:00.428) 0:00:16.171 ***** 2025-11-01 13:58:32.990335 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.990345 | orchestrator | 2025-11-01 13:58:32.990356 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-11-01 13:58:32.990367 | orchestrator | Saturday 01 November 2025 13:58:31 +0000 (0:00:00.166) 0:00:16.338 ***** 2025-11-01 13:58:32.990396 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:32.990417 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:32.990428 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.990439 | orchestrator | 2025-11-01 13:58:32.990449 | orchestrator | 
TASK [Create DB+WAL VGs] ******************************************************* 2025-11-01 13:58:32.990460 | orchestrator | Saturday 01 November 2025 13:58:31 +0000 (0:00:00.179) 0:00:16.518 ***** 2025-11-01 13:58:32.990470 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.990481 | orchestrator | 2025-11-01 13:58:32.990491 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-11-01 13:58:32.990502 | orchestrator | Saturday 01 November 2025 13:58:31 +0000 (0:00:00.154) 0:00:16.673 ***** 2025-11-01 13:58:32.990512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:32.990523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:32.990534 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.990544 | orchestrator | 2025-11-01 13:58:32.990555 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-11-01 13:58:32.990565 | orchestrator | Saturday 01 November 2025 13:58:31 +0000 (0:00:00.156) 0:00:16.829 ***** 2025-11-01 13:58:32.990576 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:58:32.990587 | orchestrator | 2025-11-01 13:58:32.990597 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-11-01 13:58:32.990608 | orchestrator | Saturday 01 November 2025 13:58:32 +0000 (0:00:00.145) 0:00:16.975 ***** 2025-11-01 13:58:32.990624 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:32.990635 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:32.990646 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.990657 | orchestrator | 2025-11-01 13:58:32.990667 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-11-01 13:58:32.990678 | orchestrator | Saturday 01 November 2025 13:58:32 +0000 (0:00:00.181) 0:00:17.156 ***** 2025-11-01 13:58:32.990688 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:32.990699 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:32.990710 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.990720 | orchestrator | 2025-11-01 13:58:32.990731 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-11-01 13:58:32.990741 | orchestrator | Saturday 01 November 2025 13:58:32 +0000 (0:00:00.172) 0:00:17.328 ***** 2025-11-01 13:58:32.990752 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:32.990762 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  
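
The "Create block VGs" and "Create block LVs" tasks above create, for testbed-node-3, one volume group ceph-<uuid> per OSD device (physical volumes /dev/sdb and /dev/sdc) and one logical volume osd-block-<uuid> inside each. The sketch below only prints the equivalent plain LVM commands for those two devices; the actual playbook may use different modules and options, and nothing is executed here.

# Sketch: plain LVM commands corresponding to 'Create block VGs' /
# 'Create block LVs' for testbed-node-3 (UUIDs and PVs taken from the log).
# Commands are only printed, not executed; real options may differ.
block_devices = {
    "/dev/sdb": "47edfe94-e799-500a-9f78-eae255c41273",
    "/dev/sdc": "efff7302-70e8-5bbc-90af-2166d1a25777",
}

def lvm_commands(devices):
    for pv, uuid in devices.items():
        vg = f"ceph-{uuid}"
        lv = f"osd-block-{uuid}"
        yield ["vgcreate", vg, pv]                          # VG on the whole device
        yield ["lvcreate", "-n", lv, "-l", "100%FREE", vg]  # LV using all free extents

if __name__ == "__main__":
    for cmd in lvm_commands(block_devices):
        print(" ".join(cmd))
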
2025-11-01 13:58:32.990773 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.990784 | orchestrator | 2025-11-01 13:58:32.990794 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-11-01 13:58:32.990805 | orchestrator | Saturday 01 November 2025 13:58:32 +0000 (0:00:00.184) 0:00:17.513 ***** 2025-11-01 13:58:32.990815 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.990835 | orchestrator | 2025-11-01 13:58:32.990846 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-11-01 13:58:32.990857 | orchestrator | Saturday 01 November 2025 13:58:32 +0000 (0:00:00.154) 0:00:17.668 ***** 2025-11-01 13:58:32.990868 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:32.990878 | orchestrator | 2025-11-01 13:58:32.990894 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-11-01 13:58:40.325108 | orchestrator | Saturday 01 November 2025 13:58:32 +0000 (0:00:00.159) 0:00:17.827 ***** 2025-11-01 13:58:40.325208 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.325222 | orchestrator | 2025-11-01 13:58:40.325233 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-11-01 13:58:40.325243 | orchestrator | Saturday 01 November 2025 13:58:33 +0000 (0:00:00.148) 0:00:17.976 ***** 2025-11-01 13:58:40.325253 | orchestrator | ok: [testbed-node-3] => { 2025-11-01 13:58:40.325263 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-11-01 13:58:40.325273 | orchestrator | } 2025-11-01 13:58:40.325283 | orchestrator | 2025-11-01 13:58:40.325293 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-11-01 13:58:40.325302 | orchestrator | Saturday 01 November 2025 13:58:33 +0000 (0:00:00.390) 0:00:18.366 ***** 2025-11-01 13:58:40.325312 | orchestrator | ok: [testbed-node-3] => { 2025-11-01 13:58:40.325321 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-11-01 13:58:40.325331 | orchestrator | } 2025-11-01 13:58:40.325340 | orchestrator | 2025-11-01 13:58:40.325350 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-11-01 13:58:40.325360 | orchestrator | Saturday 01 November 2025 13:58:33 +0000 (0:00:00.171) 0:00:18.537 ***** 2025-11-01 13:58:40.325369 | orchestrator | ok: [testbed-node-3] => { 2025-11-01 13:58:40.325415 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-11-01 13:58:40.325426 | orchestrator | } 2025-11-01 13:58:40.325437 | orchestrator | 2025-11-01 13:58:40.325447 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-11-01 13:58:40.325457 | orchestrator | Saturday 01 November 2025 13:58:33 +0000 (0:00:00.148) 0:00:18.686 ***** 2025-11-01 13:58:40.325466 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:58:40.325476 | orchestrator | 2025-11-01 13:58:40.325485 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-11-01 13:58:40.325494 | orchestrator | Saturday 01 November 2025 13:58:34 +0000 (0:00:00.797) 0:00:19.483 ***** 2025-11-01 13:58:40.325504 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:58:40.325513 | orchestrator | 2025-11-01 13:58:40.325523 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-11-01 13:58:40.325532 | orchestrator | Saturday 01 November 2025 13:58:35 +0000 
(0:00:00.561) 0:00:20.044 ***** 2025-11-01 13:58:40.325542 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:58:40.325551 | orchestrator | 2025-11-01 13:58:40.325561 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-11-01 13:58:40.325570 | orchestrator | Saturday 01 November 2025 13:58:35 +0000 (0:00:00.618) 0:00:20.662 ***** 2025-11-01 13:58:40.325579 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:58:40.325589 | orchestrator | 2025-11-01 13:58:40.325598 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-11-01 13:58:40.325608 | orchestrator | Saturday 01 November 2025 13:58:36 +0000 (0:00:00.213) 0:00:20.876 ***** 2025-11-01 13:58:40.325617 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.325627 | orchestrator | 2025-11-01 13:58:40.325636 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-11-01 13:58:40.325646 | orchestrator | Saturday 01 November 2025 13:58:36 +0000 (0:00:00.155) 0:00:21.032 ***** 2025-11-01 13:58:40.325655 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.325666 | orchestrator | 2025-11-01 13:58:40.325676 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-11-01 13:58:40.325687 | orchestrator | Saturday 01 November 2025 13:58:36 +0000 (0:00:00.158) 0:00:21.191 ***** 2025-11-01 13:58:40.325720 | orchestrator | ok: [testbed-node-3] => { 2025-11-01 13:58:40.325731 | orchestrator |  "vgs_report": { 2025-11-01 13:58:40.325742 | orchestrator |  "vg": [] 2025-11-01 13:58:40.325753 | orchestrator |  } 2025-11-01 13:58:40.325763 | orchestrator | } 2025-11-01 13:58:40.325773 | orchestrator | 2025-11-01 13:58:40.325784 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-11-01 13:58:40.325794 | orchestrator | Saturday 01 November 2025 13:58:36 +0000 (0:00:00.206) 0:00:21.397 ***** 2025-11-01 13:58:40.325805 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.325815 | orchestrator | 2025-11-01 13:58:40.325826 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-11-01 13:58:40.325836 | orchestrator | Saturday 01 November 2025 13:58:36 +0000 (0:00:00.158) 0:00:21.555 ***** 2025-11-01 13:58:40.325846 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.325856 | orchestrator | 2025-11-01 13:58:40.325867 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-11-01 13:58:40.325877 | orchestrator | Saturday 01 November 2025 13:58:36 +0000 (0:00:00.170) 0:00:21.725 ***** 2025-11-01 13:58:40.325888 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.325898 | orchestrator | 2025-11-01 13:58:40.325909 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-11-01 13:58:40.325919 | orchestrator | Saturday 01 November 2025 13:58:37 +0000 (0:00:00.432) 0:00:22.157 ***** 2025-11-01 13:58:40.325930 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.325940 | orchestrator | 2025-11-01 13:58:40.325950 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-11-01 13:58:40.325961 | orchestrator | Saturday 01 November 2025 13:58:37 +0000 (0:00:00.209) 0:00:22.367 ***** 2025-11-01 13:58:40.325972 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.325983 | orchestrator | 
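
The "Gather DB/WAL VGs with total and available size in bytes" steps and the subsequent JSON combine produce the vgs_report printed above (empty here, since this node has no dedicated DB/WAL devices). One plausible source for such data is LVM's JSON report output, e.g. vgs with --reportformat json; the parsing sketch below works on a hard-coded sample in that report format and is only an illustration, not the playbook's code. The VG name and sizes in the sample are made up.

# Illustration: extract name/size/free from an LVM JSON report, as produced
# by e.g. `vgs -o vg_name,vg_size,vg_free --units b --reportformat json`.
# The sample is hypothetical; the run in the log returned an empty vg list.
import json

SAMPLE = """
{"report": [{"vg": [
    {"vg_name": "ceph-db-0", "vg_size": "107374182400B", "vg_free": "107374182400B"}
]}]}
"""

def parse_vgs_report(raw):
    report = json.loads(raw)
    return [
        {
            "name": vg["vg_name"],
            "size_bytes": int(vg["vg_size"].rstrip("B")),
            "free_bytes": int(vg["vg_free"].rstrip("B")),
        }
        for vg in report["report"][0]["vg"]
    ]

if __name__ == "__main__":
    print(parse_vgs_report(SAMPLE))
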
2025-11-01 13:58:40.326006 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-11-01 13:58:40.326062 | orchestrator | Saturday 01 November 2025 13:58:37 +0000 (0:00:00.157) 0:00:22.525 ***** 2025-11-01 13:58:40.326074 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.326084 | orchestrator | 2025-11-01 13:58:40.326093 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-11-01 13:58:40.326103 | orchestrator | Saturday 01 November 2025 13:58:37 +0000 (0:00:00.184) 0:00:22.710 ***** 2025-11-01 13:58:40.326112 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.326121 | orchestrator | 2025-11-01 13:58:40.326130 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-11-01 13:58:40.326140 | orchestrator | Saturday 01 November 2025 13:58:38 +0000 (0:00:00.173) 0:00:22.883 ***** 2025-11-01 13:58:40.326149 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.326158 | orchestrator | 2025-11-01 13:58:40.326167 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-11-01 13:58:40.326193 | orchestrator | Saturday 01 November 2025 13:58:38 +0000 (0:00:00.179) 0:00:23.063 ***** 2025-11-01 13:58:40.326204 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.326213 | orchestrator | 2025-11-01 13:58:40.326222 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-11-01 13:58:40.326232 | orchestrator | Saturday 01 November 2025 13:58:38 +0000 (0:00:00.198) 0:00:23.261 ***** 2025-11-01 13:58:40.326241 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.326250 | orchestrator | 2025-11-01 13:58:40.326260 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-11-01 13:58:40.326269 | orchestrator | Saturday 01 November 2025 13:58:38 +0000 (0:00:00.180) 0:00:23.442 ***** 2025-11-01 13:58:40.326279 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.326288 | orchestrator | 2025-11-01 13:58:40.326297 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-11-01 13:58:40.326307 | orchestrator | Saturday 01 November 2025 13:58:38 +0000 (0:00:00.175) 0:00:23.617 ***** 2025-11-01 13:58:40.326316 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.326325 | orchestrator | 2025-11-01 13:58:40.326342 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-11-01 13:58:40.326352 | orchestrator | Saturday 01 November 2025 13:58:38 +0000 (0:00:00.197) 0:00:23.815 ***** 2025-11-01 13:58:40.326361 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.326370 | orchestrator | 2025-11-01 13:58:40.326405 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-11-01 13:58:40.326416 | orchestrator | Saturday 01 November 2025 13:58:39 +0000 (0:00:00.160) 0:00:23.976 ***** 2025-11-01 13:58:40.326425 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.326434 | orchestrator | 2025-11-01 13:58:40.326444 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-11-01 13:58:40.326453 | orchestrator | Saturday 01 November 2025 13:58:39 +0000 (0:00:00.138) 0:00:24.114 ***** 2025-11-01 13:58:40.326464 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:40.326476 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:40.326485 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.326494 | orchestrator | 2025-11-01 13:58:40.326504 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-11-01 13:58:40.326513 | orchestrator | Saturday 01 November 2025 13:58:39 +0000 (0:00:00.302) 0:00:24.417 ***** 2025-11-01 13:58:40.326523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:40.326532 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:40.326541 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.326551 | orchestrator | 2025-11-01 13:58:40.326560 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-11-01 13:58:40.326569 | orchestrator | Saturday 01 November 2025 13:58:39 +0000 (0:00:00.143) 0:00:24.560 ***** 2025-11-01 13:58:40.326584 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:40.326594 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:40.326603 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.326613 | orchestrator | 2025-11-01 13:58:40.326622 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-11-01 13:58:40.326631 | orchestrator | Saturday 01 November 2025 13:58:39 +0000 (0:00:00.145) 0:00:24.706 ***** 2025-11-01 13:58:40.326641 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:40.326650 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:40.326660 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.326669 | orchestrator | 2025-11-01 13:58:40.326678 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-11-01 13:58:40.326688 | orchestrator | Saturday 01 November 2025 13:58:39 +0000 (0:00:00.128) 0:00:24.834 ***** 2025-11-01 13:58:40.326697 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:40.326707 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:40.326716 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:40.326738 | orchestrator | 2025-11-01 13:58:40.326748 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
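
The preceding checks ("Fail if number of OSDs exceeds num_osds ...", "Fail if size of DB LVs ... > available", "Fail if DB LV size < 30 GiB ...") validate the requested DB/WAL layout before any LVs are created; they are all skipped on this node because no separate DB/WAL devices are configured. Below is a compact sketch of the kind of arithmetic such checks imply; every input is assumed except the 30 GiB minimum named in the task titles, and this is not the playbook's actual validation code.

# Sketch of the validation arithmetic implied by the skipped checks:
# per-VG free capacity vs. requested DB LVs, plus a 30 GiB minimum per DB LV.
GIB = 1024 ** 3
MIN_DB_LV_SIZE = 30 * GIB  # minimum taken from the task names above

def check_db_vg(vg_free_bytes, num_osds, db_lv_size_bytes):
    """Raise if the requested DB LVs do not fit or are below the minimum size."""
    if db_lv_size_bytes < MIN_DB_LV_SIZE:
        raise ValueError("DB LV size below 30 GiB minimum")
    needed = num_osds * db_lv_size_bytes
    if needed > vg_free_bytes:
        raise ValueError(f"need {needed} bytes but only {vg_free_bytes} available")
    return needed

if __name__ == "__main__":
    # Hypothetical example: two 40 GiB DB LVs do not fit into a 100 GiB VG.
    try:
        check_db_vg(vg_free_bytes=100 * GIB, num_osds=2, db_lv_size_bytes=40 * GIB)
    except ValueError as exc:
        print("check failed:", exc)
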
2025-11-01 13:58:40.326758 | orchestrator | Saturday 01 November 2025 13:58:40 +0000 (0:00:00.181) 0:00:25.015 ***** 2025-11-01 13:58:40.326767 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:40.326782 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:46.391500 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:46.391605 | orchestrator | 2025-11-01 13:58:46.391631 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-11-01 13:58:46.391652 | orchestrator | Saturday 01 November 2025 13:58:40 +0000 (0:00:00.146) 0:00:25.162 ***** 2025-11-01 13:58:46.391671 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:46.391692 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:46.391711 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:46.391729 | orchestrator | 2025-11-01 13:58:46.391747 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-11-01 13:58:46.391764 | orchestrator | Saturday 01 November 2025 13:58:40 +0000 (0:00:00.170) 0:00:25.332 ***** 2025-11-01 13:58:46.391782 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:46.391798 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:46.391817 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:46.391835 | orchestrator | 2025-11-01 13:58:46.391853 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-11-01 13:58:46.391871 | orchestrator | Saturday 01 November 2025 13:58:40 +0000 (0:00:00.141) 0:00:25.474 ***** 2025-11-01 13:58:46.391890 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:58:46.391909 | orchestrator | 2025-11-01 13:58:46.391926 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-11-01 13:58:46.391943 | orchestrator | Saturday 01 November 2025 13:58:41 +0000 (0:00:00.484) 0:00:25.958 ***** 2025-11-01 13:58:46.391961 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:58:46.391979 | orchestrator | 2025-11-01 13:58:46.391997 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-11-01 13:58:46.392015 | orchestrator | Saturday 01 November 2025 13:58:41 +0000 (0:00:00.606) 0:00:26.565 ***** 2025-11-01 13:58:46.392033 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:58:46.392052 | orchestrator | 2025-11-01 13:58:46.392072 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-11-01 13:58:46.392091 | orchestrator | Saturday 01 November 2025 13:58:41 +0000 (0:00:00.167) 0:00:26.732 ***** 2025-11-01 13:58:46.392110 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'vg_name': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'}) 2025-11-01 13:58:46.392131 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'vg_name': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'}) 2025-11-01 13:58:46.392149 | orchestrator | 2025-11-01 13:58:46.392168 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-11-01 13:58:46.392188 | orchestrator | Saturday 01 November 2025 13:58:42 +0000 (0:00:00.208) 0:00:26.941 ***** 2025-11-01 13:58:46.392209 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:46.392261 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:46.392284 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:46.392302 | orchestrator | 2025-11-01 13:58:46.392320 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-11-01 13:58:46.392332 | orchestrator | Saturday 01 November 2025 13:58:42 +0000 (0:00:00.442) 0:00:27.383 ***** 2025-11-01 13:58:46.392345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:46.392356 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:46.392366 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:46.392406 | orchestrator | 2025-11-01 13:58:46.392417 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-11-01 13:58:46.392428 | orchestrator | Saturday 01 November 2025 13:58:42 +0000 (0:00:00.181) 0:00:27.565 ***** 2025-11-01 13:58:46.392440 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'})  2025-11-01 13:58:46.392451 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'})  2025-11-01 13:58:46.392461 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:58:46.392472 | orchestrator | 2025-11-01 13:58:46.392483 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-11-01 13:58:46.392493 | orchestrator | Saturday 01 November 2025 13:58:42 +0000 (0:00:00.174) 0:00:27.740 ***** 2025-11-01 13:58:46.392504 | orchestrator | ok: [testbed-node-3] => { 2025-11-01 13:58:46.392515 | orchestrator |  "lvm_report": { 2025-11-01 13:58:46.392525 | orchestrator |  "lv": [ 2025-11-01 13:58:46.392536 | orchestrator |  { 2025-11-01 13:58:46.392567 | orchestrator |  "lv_name": "osd-block-47edfe94-e799-500a-9f78-eae255c41273", 2025-11-01 13:58:46.392579 | orchestrator |  "vg_name": "ceph-47edfe94-e799-500a-9f78-eae255c41273" 2025-11-01 13:58:46.392590 | orchestrator |  }, 2025-11-01 13:58:46.392601 | orchestrator |  { 2025-11-01 13:58:46.392611 | orchestrator |  "lv_name": "osd-block-efff7302-70e8-5bbc-90af-2166d1a25777", 2025-11-01 13:58:46.392622 | orchestrator |  "vg_name": 
"ceph-efff7302-70e8-5bbc-90af-2166d1a25777" 2025-11-01 13:58:46.392632 | orchestrator |  } 2025-11-01 13:58:46.392643 | orchestrator |  ], 2025-11-01 13:58:46.392653 | orchestrator |  "pv": [ 2025-11-01 13:58:46.392664 | orchestrator |  { 2025-11-01 13:58:46.392674 | orchestrator |  "pv_name": "/dev/sdb", 2025-11-01 13:58:46.392685 | orchestrator |  "vg_name": "ceph-47edfe94-e799-500a-9f78-eae255c41273" 2025-11-01 13:58:46.392695 | orchestrator |  }, 2025-11-01 13:58:46.392706 | orchestrator |  { 2025-11-01 13:58:46.392716 | orchestrator |  "pv_name": "/dev/sdc", 2025-11-01 13:58:46.392727 | orchestrator |  "vg_name": "ceph-efff7302-70e8-5bbc-90af-2166d1a25777" 2025-11-01 13:58:46.392737 | orchestrator |  } 2025-11-01 13:58:46.392747 | orchestrator |  ] 2025-11-01 13:58:46.392758 | orchestrator |  } 2025-11-01 13:58:46.392769 | orchestrator | } 2025-11-01 13:58:46.392779 | orchestrator | 2025-11-01 13:58:46.392790 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-11-01 13:58:46.392800 | orchestrator | 2025-11-01 13:58:46.392811 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-01 13:58:46.392821 | orchestrator | Saturday 01 November 2025 13:58:43 +0000 (0:00:00.329) 0:00:28.069 ***** 2025-11-01 13:58:46.392832 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-11-01 13:58:46.392854 | orchestrator | 2025-11-01 13:58:46.392865 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-01 13:58:46.392875 | orchestrator | Saturday 01 November 2025 13:58:43 +0000 (0:00:00.267) 0:00:28.337 ***** 2025-11-01 13:58:46.392886 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:58:46.392896 | orchestrator | 2025-11-01 13:58:46.392907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:46.392917 | orchestrator | Saturday 01 November 2025 13:58:43 +0000 (0:00:00.249) 0:00:28.586 ***** 2025-11-01 13:58:46.392944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-11-01 13:58:46.392956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-11-01 13:58:46.392966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-11-01 13:58:46.392977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-11-01 13:58:46.392987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-11-01 13:58:46.392998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-11-01 13:58:46.393008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-11-01 13:58:46.393023 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-11-01 13:58:46.393034 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-11-01 13:58:46.393045 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-11-01 13:58:46.393056 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-11-01 13:58:46.393066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-11-01 13:58:46.393076 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-11-01 13:58:46.393087 | orchestrator | 2025-11-01 13:58:46.393097 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:46.393108 | orchestrator | Saturday 01 November 2025 13:58:44 +0000 (0:00:00.448) 0:00:29.035 ***** 2025-11-01 13:58:46.393119 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:46.393129 | orchestrator | 2025-11-01 13:58:46.393140 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:46.393150 | orchestrator | Saturday 01 November 2025 13:58:44 +0000 (0:00:00.223) 0:00:29.259 ***** 2025-11-01 13:58:46.393160 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:46.393171 | orchestrator | 2025-11-01 13:58:46.393181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:46.393192 | orchestrator | Saturday 01 November 2025 13:58:44 +0000 (0:00:00.216) 0:00:29.475 ***** 2025-11-01 13:58:46.393202 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:46.393212 | orchestrator | 2025-11-01 13:58:46.393223 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:46.393234 | orchestrator | Saturday 01 November 2025 13:58:45 +0000 (0:00:00.679) 0:00:30.155 ***** 2025-11-01 13:58:46.393244 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:46.393254 | orchestrator | 2025-11-01 13:58:46.393265 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:46.393275 | orchestrator | Saturday 01 November 2025 13:58:45 +0000 (0:00:00.246) 0:00:30.402 ***** 2025-11-01 13:58:46.393286 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:46.393296 | orchestrator | 2025-11-01 13:58:46.393307 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:46.393317 | orchestrator | Saturday 01 November 2025 13:58:45 +0000 (0:00:00.262) 0:00:30.664 ***** 2025-11-01 13:58:46.393328 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:46.393338 | orchestrator | 2025-11-01 13:58:46.393356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:46.393367 | orchestrator | Saturday 01 November 2025 13:58:46 +0000 (0:00:00.278) 0:00:30.943 ***** 2025-11-01 13:58:46.393393 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:46.393405 | orchestrator | 2025-11-01 13:58:46.393423 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:58.763211 | orchestrator | Saturday 01 November 2025 13:58:46 +0000 (0:00:00.280) 0:00:31.223 ***** 2025-11-01 13:58:58.763307 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.763322 | orchestrator | 2025-11-01 13:58:58.763334 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:58.763346 | orchestrator | Saturday 01 November 2025 13:58:46 +0000 (0:00:00.222) 0:00:31.446 ***** 2025-11-01 13:58:58.763357 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad) 2025-11-01 13:58:58.763369 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad) 2025-11-01 
13:58:58.763415 | orchestrator | 2025-11-01 13:58:58.763427 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:58.763438 | orchestrator | Saturday 01 November 2025 13:58:47 +0000 (0:00:00.594) 0:00:32.041 ***** 2025-11-01 13:58:58.763449 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_08ca9d91-9929-4ba3-9cad-ed75b64a043e) 2025-11-01 13:58:58.763460 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_08ca9d91-9929-4ba3-9cad-ed75b64a043e) 2025-11-01 13:58:58.763471 | orchestrator | 2025-11-01 13:58:58.763482 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:58.763492 | orchestrator | Saturday 01 November 2025 13:58:47 +0000 (0:00:00.521) 0:00:32.562 ***** 2025-11-01 13:58:58.763503 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_072d7475-b9a0-4b66-89cc-e4fcf46016ff) 2025-11-01 13:58:58.763514 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_072d7475-b9a0-4b66-89cc-e4fcf46016ff) 2025-11-01 13:58:58.763525 | orchestrator | 2025-11-01 13:58:58.763535 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:58.763546 | orchestrator | Saturday 01 November 2025 13:58:48 +0000 (0:00:00.558) 0:00:33.121 ***** 2025-11-01 13:58:58.763557 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d5b7cda2-7cd1-4139-8c09-f2864ed6115a) 2025-11-01 13:58:58.763568 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d5b7cda2-7cd1-4139-8c09-f2864ed6115a) 2025-11-01 13:58:58.763578 | orchestrator | 2025-11-01 13:58:58.763589 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:58:58.763600 | orchestrator | Saturday 01 November 2025 13:58:49 +0000 (0:00:00.727) 0:00:33.849 ***** 2025-11-01 13:58:58.763610 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-01 13:58:58.763621 | orchestrator | 2025-11-01 13:58:58.763632 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:58.763643 | orchestrator | Saturday 01 November 2025 13:58:49 +0000 (0:00:00.634) 0:00:34.483 ***** 2025-11-01 13:58:58.763653 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-11-01 13:58:58.763677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-11-01 13:58:58.763688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-11-01 13:58:58.763699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-11-01 13:58:58.763709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-11-01 13:58:58.763720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-11-01 13:58:58.763730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-11-01 13:58:58.763764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-11-01 13:58:58.763777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-11-01 13:58:58.763789 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-11-01 13:58:58.763802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-11-01 13:58:58.763814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-11-01 13:58:58.763826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-11-01 13:58:58.763838 | orchestrator | 2025-11-01 13:58:58.763850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:58.763863 | orchestrator | Saturday 01 November 2025 13:58:50 +0000 (0:00:01.073) 0:00:35.557 ***** 2025-11-01 13:58:58.763874 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.763886 | orchestrator | 2025-11-01 13:58:58.763898 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:58.763910 | orchestrator | Saturday 01 November 2025 13:58:50 +0000 (0:00:00.221) 0:00:35.779 ***** 2025-11-01 13:58:58.763921 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.763934 | orchestrator | 2025-11-01 13:58:58.763946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:58.763958 | orchestrator | Saturday 01 November 2025 13:58:51 +0000 (0:00:00.235) 0:00:36.014 ***** 2025-11-01 13:58:58.763970 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.763982 | orchestrator | 2025-11-01 13:58:58.763994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:58.764006 | orchestrator | Saturday 01 November 2025 13:58:51 +0000 (0:00:00.211) 0:00:36.226 ***** 2025-11-01 13:58:58.764018 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.764031 | orchestrator | 2025-11-01 13:58:58.764059 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:58.764072 | orchestrator | Saturday 01 November 2025 13:58:51 +0000 (0:00:00.253) 0:00:36.479 ***** 2025-11-01 13:58:58.764083 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.764095 | orchestrator | 2025-11-01 13:58:58.764107 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:58.764120 | orchestrator | Saturday 01 November 2025 13:58:51 +0000 (0:00:00.234) 0:00:36.714 ***** 2025-11-01 13:58:58.764130 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.764141 | orchestrator | 2025-11-01 13:58:58.764151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:58.764162 | orchestrator | Saturday 01 November 2025 13:58:52 +0000 (0:00:00.240) 0:00:36.954 ***** 2025-11-01 13:58:58.764172 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.764183 | orchestrator | 2025-11-01 13:58:58.764194 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:58.764204 | orchestrator | Saturday 01 November 2025 13:58:52 +0000 (0:00:00.239) 0:00:37.194 ***** 2025-11-01 13:58:58.764215 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.764225 | orchestrator | 2025-11-01 13:58:58.764236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:58.764247 | orchestrator 
| Saturday 01 November 2025 13:58:52 +0000 (0:00:00.219) 0:00:37.413 ***** 2025-11-01 13:58:58.764258 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-11-01 13:58:58.764268 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-11-01 13:58:58.764279 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-11-01 13:58:58.764290 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-11-01 13:58:58.764300 | orchestrator | 2025-11-01 13:58:58.764312 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:58.764322 | orchestrator | Saturday 01 November 2025 13:58:53 +0000 (0:00:00.928) 0:00:38.342 ***** 2025-11-01 13:58:58.764341 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.764352 | orchestrator | 2025-11-01 13:58:58.764363 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:58.764373 | orchestrator | Saturday 01 November 2025 13:58:53 +0000 (0:00:00.217) 0:00:38.559 ***** 2025-11-01 13:58:58.764405 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.764416 | orchestrator | 2025-11-01 13:58:58.764426 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:58.764437 | orchestrator | Saturday 01 November 2025 13:58:54 +0000 (0:00:00.753) 0:00:39.313 ***** 2025-11-01 13:58:58.764448 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.764458 | orchestrator | 2025-11-01 13:58:58.764469 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:58:58.764480 | orchestrator | Saturday 01 November 2025 13:58:54 +0000 (0:00:00.228) 0:00:39.542 ***** 2025-11-01 13:58:58.764491 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.764501 | orchestrator | 2025-11-01 13:58:58.764512 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-11-01 13:58:58.764523 | orchestrator | Saturday 01 November 2025 13:58:54 +0000 (0:00:00.223) 0:00:39.766 ***** 2025-11-01 13:58:58.764534 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.764544 | orchestrator | 2025-11-01 13:58:58.764555 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-11-01 13:58:58.764566 | orchestrator | Saturday 01 November 2025 13:58:55 +0000 (0:00:00.179) 0:00:39.945 ***** 2025-11-01 13:58:58.764577 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bf0a4791-ac15-5066-8808-a0a6deeb0cc9'}}) 2025-11-01 13:58:58.764588 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5630d3b4-f241-5aa8-9956-015e1822542e'}}) 2025-11-01 13:58:58.764599 | orchestrator | 2025-11-01 13:58:58.764609 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-11-01 13:58:58.764620 | orchestrator | Saturday 01 November 2025 13:58:55 +0000 (0:00:00.239) 0:00:40.184 ***** 2025-11-01 13:58:58.764632 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'}) 2025-11-01 13:58:58.764644 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'}) 2025-11-01 13:58:58.764655 | orchestrator | 2025-11-01 13:58:58.764666 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-11-01 13:58:58.764676 | orchestrator | Saturday 01 November 2025 13:58:57 +0000 (0:00:01.943) 0:00:42.128 ***** 2025-11-01 13:58:58.764687 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:58:58.764699 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:58:58.764710 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:58:58.764721 | orchestrator | 2025-11-01 13:58:58.764732 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-11-01 13:58:58.764742 | orchestrator | Saturday 01 November 2025 13:58:57 +0000 (0:00:00.173) 0:00:42.301 ***** 2025-11-01 13:58:58.764753 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'}) 2025-11-01 13:58:58.764764 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'}) 2025-11-01 13:58:58.764775 | orchestrator | 2025-11-01 13:58:58.764791 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-11-01 13:59:04.929532 | orchestrator | Saturday 01 November 2025 13:58:58 +0000 (0:00:01.294) 0:00:43.596 ***** 2025-11-01 13:59:04.929659 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:04.929675 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:04.929685 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.929697 | orchestrator | 2025-11-01 13:59:04.929708 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-11-01 13:59:04.929717 | orchestrator | Saturday 01 November 2025 13:58:58 +0000 (0:00:00.177) 0:00:43.773 ***** 2025-11-01 13:59:04.929727 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.929737 | orchestrator | 2025-11-01 13:59:04.929746 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-11-01 13:59:04.929757 | orchestrator | Saturday 01 November 2025 13:58:59 +0000 (0:00:00.139) 0:00:43.913 ***** 2025-11-01 13:59:04.929766 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:04.929791 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:04.929801 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.929811 | orchestrator | 2025-11-01 13:59:04.929820 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-11-01 13:59:04.929830 | orchestrator | Saturday 01 November 2025 13:58:59 +0000 (0:00:00.178) 0:00:44.091 ***** 2025-11-01 13:59:04.929840 | orchestrator | skipping: [testbed-node-4] 
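The two "changed" results for testbed-node-4 above create one volume group per OSD disk (/dev/sdb and /dev/sdc, per the "Create dict of block VGs -> PVs from ceph_osd_devices" items) and one osd-block LV filling each VG. A rough equivalent is sketched below, assuming the community.general LVM modules; the VG/LV/PV names are copied from the log, everything else is illustrative.

    # Sketch only: block VG + LV layout as reported for testbed-node-4.
    - hosts: testbed-node-4
      become: true
      tasks:
        - name: Create block VGs
          community.general.lvg:
            vg: "{{ item.vg }}"
            pvs: "{{ item.pv }}"
          loop:
            - vg: ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9
              pv: /dev/sdb
            - vg: ceph-5630d3b4-f241-5aa8-9956-015e1822542e
              pv: /dev/sdc

        - name: Create block LVs
          community.general.lvol:
            vg: "{{ item.data_vg }}"
            lv: "{{ item.data }}"
            size: 100%FREE
            shrink: false
          loop:
            - data: osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9
              data_vg: ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9
            - data: osd-block-5630d3b4-f241-5aa8-9956-015e1822542e
              data_vg: ceph-5630d3b4-f241-5aa8-9956-015e1822542e

The data/data_vg pairs in the loop mirror the lvm_volumes entries echoed in the skipped items above, which is how the later DB/WAL sizing and "Fail if ... LV is missing" checks refer back to these volumes.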
2025-11-01 13:59:04.929849 | orchestrator | 2025-11-01 13:59:04.929859 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-11-01 13:59:04.929868 | orchestrator | Saturday 01 November 2025 13:58:59 +0000 (0:00:00.144) 0:00:44.236 ***** 2025-11-01 13:59:04.929878 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:04.929888 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:04.929897 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.929907 | orchestrator | 2025-11-01 13:59:04.929917 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-11-01 13:59:04.929926 | orchestrator | Saturday 01 November 2025 13:58:59 +0000 (0:00:00.398) 0:00:44.635 ***** 2025-11-01 13:59:04.929940 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.929950 | orchestrator | 2025-11-01 13:59:04.929960 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-11-01 13:59:04.929970 | orchestrator | Saturday 01 November 2025 13:58:59 +0000 (0:00:00.168) 0:00:44.804 ***** 2025-11-01 13:59:04.929979 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:04.929989 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:04.929999 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.930008 | orchestrator | 2025-11-01 13:59:04.930069 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-11-01 13:59:04.930081 | orchestrator | Saturday 01 November 2025 13:59:00 +0000 (0:00:00.162) 0:00:44.966 ***** 2025-11-01 13:59:04.930093 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:59:04.930105 | orchestrator | 2025-11-01 13:59:04.930117 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-11-01 13:59:04.930129 | orchestrator | Saturday 01 November 2025 13:59:00 +0000 (0:00:00.193) 0:00:45.159 ***** 2025-11-01 13:59:04.930149 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:04.930161 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:04.930173 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.930184 | orchestrator | 2025-11-01 13:59:04.930196 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-11-01 13:59:04.930207 | orchestrator | Saturday 01 November 2025 13:59:00 +0000 (0:00:00.206) 0:00:45.366 ***** 2025-11-01 13:59:04.930219 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:04.930231 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:04.930243 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.930254 | orchestrator | 2025-11-01 13:59:04.930266 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-11-01 13:59:04.930278 | orchestrator | Saturday 01 November 2025 13:59:00 +0000 (0:00:00.217) 0:00:45.583 ***** 2025-11-01 13:59:04.930306 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:04.930319 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:04.930330 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.930341 | orchestrator | 2025-11-01 13:59:04.930352 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-11-01 13:59:04.930363 | orchestrator | Saturday 01 November 2025 13:59:00 +0000 (0:00:00.163) 0:00:45.747 ***** 2025-11-01 13:59:04.930374 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.930408 | orchestrator | 2025-11-01 13:59:04.930419 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-11-01 13:59:04.930431 | orchestrator | Saturday 01 November 2025 13:59:01 +0000 (0:00:00.184) 0:00:45.932 ***** 2025-11-01 13:59:04.930441 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.930450 | orchestrator | 2025-11-01 13:59:04.930460 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-11-01 13:59:04.930470 | orchestrator | Saturday 01 November 2025 13:59:01 +0000 (0:00:00.152) 0:00:46.084 ***** 2025-11-01 13:59:04.930479 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.930489 | orchestrator | 2025-11-01 13:59:04.930499 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-11-01 13:59:04.930508 | orchestrator | Saturday 01 November 2025 13:59:01 +0000 (0:00:00.149) 0:00:46.234 ***** 2025-11-01 13:59:04.930518 | orchestrator | ok: [testbed-node-4] => { 2025-11-01 13:59:04.930528 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-11-01 13:59:04.930538 | orchestrator | } 2025-11-01 13:59:04.930548 | orchestrator | 2025-11-01 13:59:04.930557 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-11-01 13:59:04.930567 | orchestrator | Saturday 01 November 2025 13:59:01 +0000 (0:00:00.195) 0:00:46.429 ***** 2025-11-01 13:59:04.930577 | orchestrator | ok: [testbed-node-4] => { 2025-11-01 13:59:04.930586 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-11-01 13:59:04.930596 | orchestrator | } 2025-11-01 13:59:04.930605 | orchestrator | 2025-11-01 13:59:04.930615 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-11-01 13:59:04.930625 | orchestrator | Saturday 01 November 2025 13:59:01 +0000 (0:00:00.154) 0:00:46.584 ***** 2025-11-01 13:59:04.930634 | orchestrator | ok: [testbed-node-4] => { 2025-11-01 13:59:04.930644 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-11-01 13:59:04.930661 | orchestrator | } 2025-11-01 13:59:04.930670 | orchestrator | 2025-11-01 13:59:04.930680 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-11-01 13:59:04.930690 | orchestrator | Saturday 01 November 2025 13:59:02 +0000 (0:00:00.400) 0:00:46.984 ***** 2025-11-01 13:59:04.930700 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:59:04.930709 | orchestrator | 2025-11-01 13:59:04.930719 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-11-01 13:59:04.930729 | orchestrator | Saturday 01 November 2025 13:59:02 +0000 (0:00:00.560) 0:00:47.545 ***** 2025-11-01 13:59:04.930743 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:59:04.930753 | orchestrator | 2025-11-01 13:59:04.930763 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-11-01 13:59:04.930773 | orchestrator | Saturday 01 November 2025 13:59:03 +0000 (0:00:00.529) 0:00:48.075 ***** 2025-11-01 13:59:04.930783 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:59:04.930792 | orchestrator | 2025-11-01 13:59:04.930802 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-11-01 13:59:04.930812 | orchestrator | Saturday 01 November 2025 13:59:03 +0000 (0:00:00.538) 0:00:48.613 ***** 2025-11-01 13:59:04.930821 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:59:04.930831 | orchestrator | 2025-11-01 13:59:04.930841 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-11-01 13:59:04.930850 | orchestrator | Saturday 01 November 2025 13:59:03 +0000 (0:00:00.152) 0:00:48.765 ***** 2025-11-01 13:59:04.930860 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.930870 | orchestrator | 2025-11-01 13:59:04.930879 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-11-01 13:59:04.930889 | orchestrator | Saturday 01 November 2025 13:59:04 +0000 (0:00:00.123) 0:00:48.889 ***** 2025-11-01 13:59:04.930899 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.930908 | orchestrator | 2025-11-01 13:59:04.930918 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-11-01 13:59:04.930928 | orchestrator | Saturday 01 November 2025 13:59:04 +0000 (0:00:00.120) 0:00:49.010 ***** 2025-11-01 13:59:04.930937 | orchestrator | ok: [testbed-node-4] => { 2025-11-01 13:59:04.930947 | orchestrator |  "vgs_report": { 2025-11-01 13:59:04.930957 | orchestrator |  "vg": [] 2025-11-01 13:59:04.930967 | orchestrator |  } 2025-11-01 13:59:04.930976 | orchestrator | } 2025-11-01 13:59:04.930986 | orchestrator | 2025-11-01 13:59:04.930996 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-11-01 13:59:04.931005 | orchestrator | Saturday 01 November 2025 13:59:04 +0000 (0:00:00.161) 0:00:49.171 ***** 2025-11-01 13:59:04.931015 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.931025 | orchestrator | 2025-11-01 13:59:04.931034 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-11-01 13:59:04.931044 | orchestrator | Saturday 01 November 2025 13:59:04 +0000 (0:00:00.151) 0:00:49.322 ***** 2025-11-01 13:59:04.931054 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.931063 | orchestrator | 2025-11-01 13:59:04.931073 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-11-01 13:59:04.931083 | orchestrator | Saturday 01 November 2025 13:59:04 +0000 
(0:00:00.142) 0:00:49.465 ***** 2025-11-01 13:59:04.931092 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.931102 | orchestrator | 2025-11-01 13:59:04.931112 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-11-01 13:59:04.931121 | orchestrator | Saturday 01 November 2025 13:59:04 +0000 (0:00:00.142) 0:00:49.608 ***** 2025-11-01 13:59:04.931131 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:04.931141 | orchestrator | 2025-11-01 13:59:04.931151 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-11-01 13:59:04.931166 | orchestrator | Saturday 01 November 2025 13:59:04 +0000 (0:00:00.156) 0:00:49.764 ***** 2025-11-01 13:59:10.138059 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138150 | orchestrator | 2025-11-01 13:59:10.138184 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-11-01 13:59:10.138197 | orchestrator | Saturday 01 November 2025 13:59:05 +0000 (0:00:00.394) 0:00:50.159 ***** 2025-11-01 13:59:10.138208 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138219 | orchestrator | 2025-11-01 13:59:10.138230 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-11-01 13:59:10.138240 | orchestrator | Saturday 01 November 2025 13:59:05 +0000 (0:00:00.153) 0:00:50.313 ***** 2025-11-01 13:59:10.138251 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138262 | orchestrator | 2025-11-01 13:59:10.138272 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-11-01 13:59:10.138283 | orchestrator | Saturday 01 November 2025 13:59:05 +0000 (0:00:00.134) 0:00:50.448 ***** 2025-11-01 13:59:10.138294 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138304 | orchestrator | 2025-11-01 13:59:10.138315 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-11-01 13:59:10.138326 | orchestrator | Saturday 01 November 2025 13:59:05 +0000 (0:00:00.146) 0:00:50.594 ***** 2025-11-01 13:59:10.138336 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138347 | orchestrator | 2025-11-01 13:59:10.138358 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-11-01 13:59:10.138368 | orchestrator | Saturday 01 November 2025 13:59:05 +0000 (0:00:00.147) 0:00:50.741 ***** 2025-11-01 13:59:10.138423 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138436 | orchestrator | 2025-11-01 13:59:10.138446 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-11-01 13:59:10.138457 | orchestrator | Saturday 01 November 2025 13:59:06 +0000 (0:00:00.148) 0:00:50.890 ***** 2025-11-01 13:59:10.138467 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138478 | orchestrator | 2025-11-01 13:59:10.138489 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-11-01 13:59:10.138499 | orchestrator | Saturday 01 November 2025 13:59:06 +0000 (0:00:00.148) 0:00:51.038 ***** 2025-11-01 13:59:10.138510 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138521 | orchestrator | 2025-11-01 13:59:10.138531 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-11-01 13:59:10.138542 | orchestrator | Saturday 01 November 2025 
13:59:06 +0000 (0:00:00.158) 0:00:51.197 ***** 2025-11-01 13:59:10.138553 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138564 | orchestrator | 2025-11-01 13:59:10.138576 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-11-01 13:59:10.138588 | orchestrator | Saturday 01 November 2025 13:59:06 +0000 (0:00:00.161) 0:00:51.358 ***** 2025-11-01 13:59:10.138600 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138612 | orchestrator | 2025-11-01 13:59:10.138624 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-11-01 13:59:10.138635 | orchestrator | Saturday 01 November 2025 13:59:06 +0000 (0:00:00.143) 0:00:51.501 ***** 2025-11-01 13:59:10.138661 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:10.138676 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:10.138688 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138700 | orchestrator | 2025-11-01 13:59:10.138711 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-11-01 13:59:10.138722 | orchestrator | Saturday 01 November 2025 13:59:06 +0000 (0:00:00.180) 0:00:51.681 ***** 2025-11-01 13:59:10.138733 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:10.138743 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:10.138762 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138773 | orchestrator | 2025-11-01 13:59:10.138783 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-11-01 13:59:10.138794 | orchestrator | Saturday 01 November 2025 13:59:07 +0000 (0:00:00.171) 0:00:51.853 ***** 2025-11-01 13:59:10.138805 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:10.138815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:10.138826 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138836 | orchestrator | 2025-11-01 13:59:10.138847 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-11-01 13:59:10.138857 | orchestrator | Saturday 01 November 2025 13:59:07 +0000 (0:00:00.433) 0:00:52.287 ***** 2025-11-01 13:59:10.138868 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:10.138879 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:10.138889 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138900 | orchestrator | 2025-11-01 13:59:10.138911 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-11-01 13:59:10.138936 | orchestrator | Saturday 01 November 2025 13:59:07 +0000 (0:00:00.162) 0:00:52.449 ***** 2025-11-01 13:59:10.138948 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:10.138959 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:10.138970 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.138980 | orchestrator | 2025-11-01 13:59:10.138991 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-11-01 13:59:10.139002 | orchestrator | Saturday 01 November 2025 13:59:07 +0000 (0:00:00.185) 0:00:52.634 ***** 2025-11-01 13:59:10.139012 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:10.139023 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:10.139033 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.139045 | orchestrator | 2025-11-01 13:59:10.139056 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-11-01 13:59:10.139066 | orchestrator | Saturday 01 November 2025 13:59:07 +0000 (0:00:00.189) 0:00:52.824 ***** 2025-11-01 13:59:10.139077 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:10.139088 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:10.139098 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.139109 | orchestrator | 2025-11-01 13:59:10.139120 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-11-01 13:59:10.139130 | orchestrator | Saturday 01 November 2025 13:59:08 +0000 (0:00:00.178) 0:00:53.003 ***** 2025-11-01 13:59:10.139141 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:10.139158 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:10.139169 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.139179 | orchestrator | 2025-11-01 13:59:10.139190 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-11-01 13:59:10.139237 | orchestrator | Saturday 01 November 2025 13:59:08 +0000 (0:00:00.151) 0:00:53.155 ***** 2025-11-01 13:59:10.139249 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:59:10.139260 | orchestrator | 2025-11-01 13:59:10.139271 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-11-01 13:59:10.139281 | orchestrator | Saturday 01 November 2025 13:59:08 +0000 (0:00:00.530) 
0:00:53.685 ***** 2025-11-01 13:59:10.139292 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:59:10.139303 | orchestrator | 2025-11-01 13:59:10.139313 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-11-01 13:59:10.139324 | orchestrator | Saturday 01 November 2025 13:59:09 +0000 (0:00:00.579) 0:00:54.265 ***** 2025-11-01 13:59:10.139335 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:59:10.139345 | orchestrator | 2025-11-01 13:59:10.139356 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-11-01 13:59:10.139366 | orchestrator | Saturday 01 November 2025 13:59:09 +0000 (0:00:00.172) 0:00:54.437 ***** 2025-11-01 13:59:10.139377 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'vg_name': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'}) 2025-11-01 13:59:10.139409 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'vg_name': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'}) 2025-11-01 13:59:10.139420 | orchestrator | 2025-11-01 13:59:10.139431 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-11-01 13:59:10.139442 | orchestrator | Saturday 01 November 2025 13:59:09 +0000 (0:00:00.177) 0:00:54.615 ***** 2025-11-01 13:59:10.139452 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:10.139463 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:10.139474 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:10.139484 | orchestrator | 2025-11-01 13:59:10.139495 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-11-01 13:59:10.139506 | orchestrator | Saturday 01 November 2025 13:59:09 +0000 (0:00:00.176) 0:00:54.791 ***** 2025-11-01 13:59:10.139516 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:10.139527 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:10.139545 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:16.746990 | orchestrator | 2025-11-01 13:59:16.747100 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-11-01 13:59:16.747118 | orchestrator | Saturday 01 November 2025 13:59:10 +0000 (0:00:00.181) 0:00:54.973 ***** 2025-11-01 13:59:16.747131 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'})  2025-11-01 13:59:16.747145 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'})  2025-11-01 13:59:16.747156 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:59:16.747168 | orchestrator | 2025-11-01 13:59:16.747180 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-11-01 13:59:16.747190 
| orchestrator | Saturday 01 November 2025 13:59:10 +0000 (0:00:00.180) 0:00:55.153 ***** 2025-11-01 13:59:16.747224 | orchestrator | ok: [testbed-node-4] => { 2025-11-01 13:59:16.747236 | orchestrator |  "lvm_report": { 2025-11-01 13:59:16.747248 | orchestrator |  "lv": [ 2025-11-01 13:59:16.747259 | orchestrator |  { 2025-11-01 13:59:16.747270 | orchestrator |  "lv_name": "osd-block-5630d3b4-f241-5aa8-9956-015e1822542e", 2025-11-01 13:59:16.747281 | orchestrator |  "vg_name": "ceph-5630d3b4-f241-5aa8-9956-015e1822542e" 2025-11-01 13:59:16.747292 | orchestrator |  }, 2025-11-01 13:59:16.747302 | orchestrator |  { 2025-11-01 13:59:16.747313 | orchestrator |  "lv_name": "osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9", 2025-11-01 13:59:16.747324 | orchestrator |  "vg_name": "ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9" 2025-11-01 13:59:16.747334 | orchestrator |  } 2025-11-01 13:59:16.747345 | orchestrator |  ], 2025-11-01 13:59:16.747355 | orchestrator |  "pv": [ 2025-11-01 13:59:16.747365 | orchestrator |  { 2025-11-01 13:59:16.747376 | orchestrator |  "pv_name": "/dev/sdb", 2025-11-01 13:59:16.747435 | orchestrator |  "vg_name": "ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9" 2025-11-01 13:59:16.747446 | orchestrator |  }, 2025-11-01 13:59:16.747456 | orchestrator |  { 2025-11-01 13:59:16.747467 | orchestrator |  "pv_name": "/dev/sdc", 2025-11-01 13:59:16.747478 | orchestrator |  "vg_name": "ceph-5630d3b4-f241-5aa8-9956-015e1822542e" 2025-11-01 13:59:16.747488 | orchestrator |  } 2025-11-01 13:59:16.747499 | orchestrator |  ] 2025-11-01 13:59:16.747510 | orchestrator |  } 2025-11-01 13:59:16.747520 | orchestrator | } 2025-11-01 13:59:16.747531 | orchestrator | 2025-11-01 13:59:16.747542 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-11-01 13:59:16.747553 | orchestrator | 2025-11-01 13:59:16.747563 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-01 13:59:16.747574 | orchestrator | Saturday 01 November 2025 13:59:10 +0000 (0:00:00.544) 0:00:55.698 ***** 2025-11-01 13:59:16.747585 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-11-01 13:59:16.747596 | orchestrator | 2025-11-01 13:59:16.747620 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-01 13:59:16.747632 | orchestrator | Saturday 01 November 2025 13:59:11 +0000 (0:00:00.294) 0:00:55.992 ***** 2025-11-01 13:59:16.747643 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:59:16.747655 | orchestrator | 2025-11-01 13:59:16.747665 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:59:16.747676 | orchestrator | Saturday 01 November 2025 13:59:11 +0000 (0:00:00.257) 0:00:56.250 ***** 2025-11-01 13:59:16.747687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-11-01 13:59:16.747697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-11-01 13:59:16.747708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-11-01 13:59:16.747718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-11-01 13:59:16.747729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-11-01 13:59:16.747740 | orchestrator | included: /ansible/tasks/_add-device-links.yml 
for testbed-node-5 => (item=loop5) 2025-11-01 13:59:16.747751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-11-01 13:59:16.747761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-11-01 13:59:16.747772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-11-01 13:59:16.747783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-11-01 13:59:16.747793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-11-01 13:59:16.747813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-11-01 13:59:16.747824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-11-01 13:59:16.747834 | orchestrator | 2025-11-01 13:59:16.747845 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:59:16.747855 | orchestrator | Saturday 01 November 2025 13:59:11 +0000 (0:00:00.448) 0:00:56.699 ***** 2025-11-01 13:59:16.747866 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:16.747881 | orchestrator | 2025-11-01 13:59:16.747892 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:59:16.747903 | orchestrator | Saturday 01 November 2025 13:59:12 +0000 (0:00:00.238) 0:00:56.937 ***** 2025-11-01 13:59:16.747914 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:16.747924 | orchestrator | 2025-11-01 13:59:16.747935 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:59:16.747964 | orchestrator | Saturday 01 November 2025 13:59:12 +0000 (0:00:00.207) 0:00:57.145 ***** 2025-11-01 13:59:16.747975 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:16.747986 | orchestrator | 2025-11-01 13:59:16.747997 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:59:16.748007 | orchestrator | Saturday 01 November 2025 13:59:12 +0000 (0:00:00.221) 0:00:57.367 ***** 2025-11-01 13:59:16.748018 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:16.748028 | orchestrator | 2025-11-01 13:59:16.748039 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:59:16.748050 | orchestrator | Saturday 01 November 2025 13:59:12 +0000 (0:00:00.224) 0:00:57.591 ***** 2025-11-01 13:59:16.748060 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:16.748071 | orchestrator | 2025-11-01 13:59:16.748082 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:59:16.748092 | orchestrator | Saturday 01 November 2025 13:59:13 +0000 (0:00:00.685) 0:00:58.276 ***** 2025-11-01 13:59:16.748103 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:16.748113 | orchestrator | 2025-11-01 13:59:16.748124 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:59:16.748135 | orchestrator | Saturday 01 November 2025 13:59:13 +0000 (0:00:00.220) 0:00:58.497 ***** 2025-11-01 13:59:16.748145 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:16.748156 | orchestrator | 2025-11-01 13:59:16.748166 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2025-11-01 13:59:16.748177 | orchestrator | Saturday 01 November 2025 13:59:13 +0000 (0:00:00.224) 0:00:58.721 ***** 2025-11-01 13:59:16.748187 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:16.748198 | orchestrator | 2025-11-01 13:59:16.748208 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:59:16.748219 | orchestrator | Saturday 01 November 2025 13:59:14 +0000 (0:00:00.215) 0:00:58.937 ***** 2025-11-01 13:59:16.748229 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3) 2025-11-01 13:59:16.748241 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3) 2025-11-01 13:59:16.748252 | orchestrator | 2025-11-01 13:59:16.748262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:59:16.748273 | orchestrator | Saturday 01 November 2025 13:59:14 +0000 (0:00:00.442) 0:00:59.379 ***** 2025-11-01 13:59:16.748283 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dbba508b-4e10-452f-8431-011284f42e7d) 2025-11-01 13:59:16.748294 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dbba508b-4e10-452f-8431-011284f42e7d) 2025-11-01 13:59:16.748305 | orchestrator | 2025-11-01 13:59:16.748315 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:59:16.748326 | orchestrator | Saturday 01 November 2025 13:59:14 +0000 (0:00:00.452) 0:00:59.832 ***** 2025-11-01 13:59:16.748348 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f57a5620-543a-43ae-a22d-8a42cad6fb24) 2025-11-01 13:59:16.748360 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f57a5620-543a-43ae-a22d-8a42cad6fb24) 2025-11-01 13:59:16.748370 | orchestrator | 2025-11-01 13:59:16.748400 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:59:16.748411 | orchestrator | Saturday 01 November 2025 13:59:15 +0000 (0:00:00.446) 0:01:00.279 ***** 2025-11-01 13:59:16.748422 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c347dc72-435c-43d5-a9cf-2c60f1de142e) 2025-11-01 13:59:16.748433 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c347dc72-435c-43d5-a9cf-2c60f1de142e) 2025-11-01 13:59:16.748443 | orchestrator | 2025-11-01 13:59:16.748454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 13:59:16.748464 | orchestrator | Saturday 01 November 2025 13:59:15 +0000 (0:00:00.462) 0:01:00.741 ***** 2025-11-01 13:59:16.748475 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-01 13:59:16.748485 | orchestrator | 2025-11-01 13:59:16.748496 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:59:16.748506 | orchestrator | Saturday 01 November 2025 13:59:16 +0000 (0:00:00.339) 0:01:01.081 ***** 2025-11-01 13:59:16.748517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-11-01 13:59:16.748527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-11-01 13:59:16.748538 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-11-01 13:59:16.748548 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-11-01 13:59:16.748559 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-11-01 13:59:16.748569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-11-01 13:59:16.748580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-11-01 13:59:16.748590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-11-01 13:59:16.748601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-11-01 13:59:16.748611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-11-01 13:59:16.748622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-11-01 13:59:16.748639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-11-01 13:59:26.919096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-11-01 13:59:26.919201 | orchestrator | 2025-11-01 13:59:26.919218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:59:26.919231 | orchestrator | Saturday 01 November 2025 13:59:16 +0000 (0:00:00.488) 0:01:01.570 ***** 2025-11-01 13:59:26.919243 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.919255 | orchestrator | 2025-11-01 13:59:26.919266 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:59:26.919276 | orchestrator | Saturday 01 November 2025 13:59:16 +0000 (0:00:00.198) 0:01:01.768 ***** 2025-11-01 13:59:26.919287 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.919298 | orchestrator | 2025-11-01 13:59:26.919309 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:59:26.919319 | orchestrator | Saturday 01 November 2025 13:59:17 +0000 (0:00:00.886) 0:01:02.654 ***** 2025-11-01 13:59:26.919330 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.919341 | orchestrator | 2025-11-01 13:59:26.919352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:59:26.919423 | orchestrator | Saturday 01 November 2025 13:59:18 +0000 (0:00:00.267) 0:01:02.922 ***** 2025-11-01 13:59:26.919436 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.919447 | orchestrator | 2025-11-01 13:59:26.919458 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:59:26.919468 | orchestrator | Saturday 01 November 2025 13:59:18 +0000 (0:00:00.277) 0:01:03.199 ***** 2025-11-01 13:59:26.919479 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.919489 | orchestrator | 2025-11-01 13:59:26.919500 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:59:26.919511 | orchestrator | Saturday 01 November 2025 13:59:18 +0000 (0:00:00.248) 0:01:03.448 ***** 2025-11-01 13:59:26.919521 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.919532 | orchestrator | 2025-11-01 13:59:26.919542 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-11-01 13:59:26.919553 | orchestrator | Saturday 01 November 2025 13:59:18 +0000 (0:00:00.224) 0:01:03.673 ***** 2025-11-01 13:59:26.919564 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.919574 | orchestrator | 2025-11-01 13:59:26.919585 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:59:26.919595 | orchestrator | Saturday 01 November 2025 13:59:19 +0000 (0:00:00.260) 0:01:03.933 ***** 2025-11-01 13:59:26.919606 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.919616 | orchestrator | 2025-11-01 13:59:26.919627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:59:26.919638 | orchestrator | Saturday 01 November 2025 13:59:19 +0000 (0:00:00.208) 0:01:04.142 ***** 2025-11-01 13:59:26.919650 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-11-01 13:59:26.919663 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-11-01 13:59:26.919675 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-11-01 13:59:26.919687 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-11-01 13:59:26.919698 | orchestrator | 2025-11-01 13:59:26.919711 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:59:26.919723 | orchestrator | Saturday 01 November 2025 13:59:20 +0000 (0:00:00.852) 0:01:04.994 ***** 2025-11-01 13:59:26.919735 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.919747 | orchestrator | 2025-11-01 13:59:26.919758 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:59:26.919770 | orchestrator | Saturday 01 November 2025 13:59:20 +0000 (0:00:00.249) 0:01:05.244 ***** 2025-11-01 13:59:26.919782 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.919793 | orchestrator | 2025-11-01 13:59:26.919806 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:59:26.919818 | orchestrator | Saturday 01 November 2025 13:59:20 +0000 (0:00:00.202) 0:01:05.446 ***** 2025-11-01 13:59:26.919830 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.919841 | orchestrator | 2025-11-01 13:59:26.919853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 13:59:26.919865 | orchestrator | Saturday 01 November 2025 13:59:20 +0000 (0:00:00.202) 0:01:05.648 ***** 2025-11-01 13:59:26.919877 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.919889 | orchestrator | 2025-11-01 13:59:26.919901 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-11-01 13:59:26.919912 | orchestrator | Saturday 01 November 2025 13:59:21 +0000 (0:00:00.227) 0:01:05.875 ***** 2025-11-01 13:59:26.919924 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.919936 | orchestrator | 2025-11-01 13:59:26.919948 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-11-01 13:59:26.919960 | orchestrator | Saturday 01 November 2025 13:59:21 +0000 (0:00:00.404) 0:01:06.280 ***** 2025-11-01 13:59:26.919972 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'}}) 2025-11-01 13:59:26.919985 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'7e540012-4fa7-591e-a498-149cbb5b09d9'}}) 2025-11-01 13:59:26.920006 | orchestrator | 2025-11-01 13:59:26.920017 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-11-01 13:59:26.920028 | orchestrator | Saturday 01 November 2025 13:59:21 +0000 (0:00:00.222) 0:01:06.503 ***** 2025-11-01 13:59:26.920040 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'}) 2025-11-01 13:59:26.920052 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'}) 2025-11-01 13:59:26.920062 | orchestrator | 2025-11-01 13:59:26.920073 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-11-01 13:59:26.920099 | orchestrator | Saturday 01 November 2025 13:59:23 +0000 (0:00:01.930) 0:01:08.434 ***** 2025-11-01 13:59:26.920111 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:26.920124 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:26.920134 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.920145 | orchestrator | 2025-11-01 13:59:26.920156 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-11-01 13:59:26.920167 | orchestrator | Saturday 01 November 2025 13:59:23 +0000 (0:00:00.180) 0:01:08.614 ***** 2025-11-01 13:59:26.920177 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'}) 2025-11-01 13:59:26.920205 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'}) 2025-11-01 13:59:26.920217 | orchestrator | 2025-11-01 13:59:26.920228 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-11-01 13:59:26.920238 | orchestrator | Saturday 01 November 2025 13:59:25 +0000 (0:00:01.446) 0:01:10.061 ***** 2025-11-01 13:59:26.920249 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:26.920260 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:26.920271 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.920281 | orchestrator | 2025-11-01 13:59:26.920292 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-11-01 13:59:26.920303 | orchestrator | Saturday 01 November 2025 13:59:25 +0000 (0:00:00.173) 0:01:10.235 ***** 2025-11-01 13:59:26.920313 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.920324 | orchestrator | 2025-11-01 13:59:26.920335 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-11-01 13:59:26.920345 | orchestrator | Saturday 01 November 2025 13:59:25 +0000 (0:00:00.159) 0:01:10.394 ***** 2025-11-01 
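
The two "changed" tasks above are the heart of this play: every entry of ceph_osd_devices (here sdb and sdc, each carrying a pre-generated osd_lvm_uuid) is turned into one LVM volume group named ceph-<uuid> and one logical volume named osd-block-<uuid>. The following is only a sketch of equivalent Ansible tasks, with the device names and UUIDs copied from the testbed-node-5 output; it is not the actual OSISM task code.

# Illustrative sketch, not the OSISM play itself. Assumes each VG consumes
# the whole device, which matches the PV/VG report printed further below.
- hosts: testbed-node-5
  vars:
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: 8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f
      sdc:
        osd_lvm_uuid: 7e540012-4fa7-591e-a498-149cbb5b09d9
  tasks:
    - name: Create block VGs
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        pvs: "/dev/{{ item.key }}"
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
        size: 100%VG
      loop: "{{ ceph_osd_devices | dict2items }}"
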
13:59:26.920356 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:26.920371 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:26.920403 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.920414 | orchestrator | 2025-11-01 13:59:26.920425 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-11-01 13:59:26.920436 | orchestrator | Saturday 01 November 2025 13:59:25 +0000 (0:00:00.167) 0:01:10.562 ***** 2025-11-01 13:59:26.920447 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.920465 | orchestrator | 2025-11-01 13:59:26.920476 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-11-01 13:59:26.920486 | orchestrator | Saturday 01 November 2025 13:59:25 +0000 (0:00:00.152) 0:01:10.715 ***** 2025-11-01 13:59:26.920497 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:26.920508 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:26.920519 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.920529 | orchestrator | 2025-11-01 13:59:26.920540 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-11-01 13:59:26.920550 | orchestrator | Saturday 01 November 2025 13:59:26 +0000 (0:00:00.158) 0:01:10.873 ***** 2025-11-01 13:59:26.920561 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.920572 | orchestrator | 2025-11-01 13:59:26.920582 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-11-01 13:59:26.920593 | orchestrator | Saturday 01 November 2025 13:59:26 +0000 (0:00:00.141) 0:01:11.014 ***** 2025-11-01 13:59:26.920604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:26.920615 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:26.920625 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:26.920636 | orchestrator | 2025-11-01 13:59:26.920647 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-11-01 13:59:26.920657 | orchestrator | Saturday 01 November 2025 13:59:26 +0000 (0:00:00.192) 0:01:11.206 ***** 2025-11-01 13:59:26.920668 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:59:26.920679 | orchestrator | 2025-11-01 13:59:26.920689 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-11-01 13:59:26.920700 | orchestrator | Saturday 01 November 2025 13:59:26 +0000 (0:00:00.379) 0:01:11.585 ***** 2025-11-01 13:59:26.920719 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:33.579597 | orchestrator | 
skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:33.579703 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.579719 | orchestrator | 2025-11-01 13:59:33.579732 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-11-01 13:59:33.579744 | orchestrator | Saturday 01 November 2025 13:59:26 +0000 (0:00:00.171) 0:01:11.757 ***** 2025-11-01 13:59:33.579756 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:33.579767 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:33.579778 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.579789 | orchestrator | 2025-11-01 13:59:33.579801 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-11-01 13:59:33.579812 | orchestrator | Saturday 01 November 2025 13:59:27 +0000 (0:00:00.165) 0:01:11.922 ***** 2025-11-01 13:59:33.579823 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:33.579834 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:33.579845 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.579880 | orchestrator | 2025-11-01 13:59:33.579892 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-11-01 13:59:33.579903 | orchestrator | Saturday 01 November 2025 13:59:27 +0000 (0:00:00.166) 0:01:12.088 ***** 2025-11-01 13:59:33.579914 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.579924 | orchestrator | 2025-11-01 13:59:33.579935 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-11-01 13:59:33.579946 | orchestrator | Saturday 01 November 2025 13:59:27 +0000 (0:00:00.162) 0:01:12.251 ***** 2025-11-01 13:59:33.579957 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.579968 | orchestrator | 2025-11-01 13:59:33.579978 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-11-01 13:59:33.579989 | orchestrator | Saturday 01 November 2025 13:59:27 +0000 (0:00:00.160) 0:01:12.411 ***** 2025-11-01 13:59:33.579999 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.580010 | orchestrator | 2025-11-01 13:59:33.580021 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-11-01 13:59:33.580045 | orchestrator | Saturday 01 November 2025 13:59:27 +0000 (0:00:00.155) 0:01:12.567 ***** 2025-11-01 13:59:33.580056 | orchestrator | ok: [testbed-node-5] => { 2025-11-01 13:59:33.580068 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-11-01 13:59:33.580079 | orchestrator | } 2025-11-01 13:59:33.580089 | orchestrator | 2025-11-01 13:59:33.580100 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-11-01 13:59:33.580111 | orchestrator | Saturday 01 November 2025 13:59:27 +0000 (0:00:00.143) 
0:01:12.710 ***** 2025-11-01 13:59:33.580122 | orchestrator | ok: [testbed-node-5] => { 2025-11-01 13:59:33.580133 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-11-01 13:59:33.580145 | orchestrator | } 2025-11-01 13:59:33.580157 | orchestrator | 2025-11-01 13:59:33.580169 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-11-01 13:59:33.580182 | orchestrator | Saturday 01 November 2025 13:59:28 +0000 (0:00:00.150) 0:01:12.860 ***** 2025-11-01 13:59:33.580194 | orchestrator | ok: [testbed-node-5] => { 2025-11-01 13:59:33.580206 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-11-01 13:59:33.580218 | orchestrator | } 2025-11-01 13:59:33.580229 | orchestrator | 2025-11-01 13:59:33.580241 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-11-01 13:59:33.580253 | orchestrator | Saturday 01 November 2025 13:59:28 +0000 (0:00:00.155) 0:01:13.016 ***** 2025-11-01 13:59:33.580265 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:59:33.580277 | orchestrator | 2025-11-01 13:59:33.580290 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-11-01 13:59:33.580301 | orchestrator | Saturday 01 November 2025 13:59:28 +0000 (0:00:00.554) 0:01:13.571 ***** 2025-11-01 13:59:33.580313 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:59:33.580325 | orchestrator | 2025-11-01 13:59:33.580337 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-11-01 13:59:33.580349 | orchestrator | Saturday 01 November 2025 13:59:29 +0000 (0:00:00.562) 0:01:14.133 ***** 2025-11-01 13:59:33.580360 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:59:33.580372 | orchestrator | 2025-11-01 13:59:33.580408 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-11-01 13:59:33.580421 | orchestrator | Saturday 01 November 2025 13:59:30 +0000 (0:00:00.773) 0:01:14.907 ***** 2025-11-01 13:59:33.580433 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:59:33.580446 | orchestrator | 2025-11-01 13:59:33.580457 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-11-01 13:59:33.580469 | orchestrator | Saturday 01 November 2025 13:59:30 +0000 (0:00:00.172) 0:01:15.079 ***** 2025-11-01 13:59:33.580482 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.580494 | orchestrator | 2025-11-01 13:59:33.580505 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-11-01 13:59:33.580515 | orchestrator | Saturday 01 November 2025 13:59:30 +0000 (0:00:00.119) 0:01:15.199 ***** 2025-11-01 13:59:33.580534 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.580545 | orchestrator | 2025-11-01 13:59:33.580556 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-11-01 13:59:33.580566 | orchestrator | Saturday 01 November 2025 13:59:30 +0000 (0:00:00.152) 0:01:15.351 ***** 2025-11-01 13:59:33.580577 | orchestrator | ok: [testbed-node-5] => { 2025-11-01 13:59:33.580588 | orchestrator |  "vgs_report": { 2025-11-01 13:59:33.580599 | orchestrator |  "vg": [] 2025-11-01 13:59:33.580625 | orchestrator |  } 2025-11-01 13:59:33.580637 | orchestrator | } 2025-11-01 13:59:33.580648 | orchestrator | 2025-11-01 13:59:33.580658 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 
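
The "Gather ... VGs with total and available size in bytes" tasks register command output (the log refers to the registered variables as _db/wal/db_wal_vgs_cmd_output) and merge it into the vgs_report shown above, which is empty here because no DB or WAL devices are configured. A plausible sketch of such a gathering step, not the literal OSISM implementation:

# Sketch only: collect VG name/size/free as JSON via LVM's report format.
- name: Gather DB VGs with total and available size in bytes
  ansible.builtin.command: >
    vgs --reportformat json --units B --nosuffix -o vg_name,vg_size,vg_free
  register: _db_vgs_cmd_output
  changed_when: false

- name: Combine JSON from _db/wal/db_wal_vgs_cmd_output
  ansible.builtin.set_fact:
    vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report[0] }}"
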
2025-11-01 13:59:33.580669 | orchestrator | Saturday 01 November 2025 13:59:30 +0000 (0:00:00.150) 0:01:15.502 ***** 2025-11-01 13:59:33.580680 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.580691 | orchestrator | 2025-11-01 13:59:33.580701 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-11-01 13:59:33.580712 | orchestrator | Saturday 01 November 2025 13:59:30 +0000 (0:00:00.155) 0:01:15.657 ***** 2025-11-01 13:59:33.580723 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.580733 | orchestrator | 2025-11-01 13:59:33.580744 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-11-01 13:59:33.580755 | orchestrator | Saturday 01 November 2025 13:59:30 +0000 (0:00:00.156) 0:01:15.813 ***** 2025-11-01 13:59:33.580765 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.580776 | orchestrator | 2025-11-01 13:59:33.580787 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-11-01 13:59:33.580797 | orchestrator | Saturday 01 November 2025 13:59:31 +0000 (0:00:00.134) 0:01:15.948 ***** 2025-11-01 13:59:33.580808 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.580819 | orchestrator | 2025-11-01 13:59:33.580829 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-11-01 13:59:33.580840 | orchestrator | Saturday 01 November 2025 13:59:31 +0000 (0:00:00.147) 0:01:16.096 ***** 2025-11-01 13:59:33.580851 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.580861 | orchestrator | 2025-11-01 13:59:33.580872 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-11-01 13:59:33.580882 | orchestrator | Saturday 01 November 2025 13:59:31 +0000 (0:00:00.150) 0:01:16.246 ***** 2025-11-01 13:59:33.580893 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.580904 | orchestrator | 2025-11-01 13:59:33.580914 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-11-01 13:59:33.580925 | orchestrator | Saturday 01 November 2025 13:59:31 +0000 (0:00:00.209) 0:01:16.456 ***** 2025-11-01 13:59:33.580935 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.580946 | orchestrator | 2025-11-01 13:59:33.580957 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-11-01 13:59:33.580967 | orchestrator | Saturday 01 November 2025 13:59:31 +0000 (0:00:00.153) 0:01:16.609 ***** 2025-11-01 13:59:33.580978 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.580988 | orchestrator | 2025-11-01 13:59:33.580999 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-11-01 13:59:33.581010 | orchestrator | Saturday 01 November 2025 13:59:32 +0000 (0:00:00.383) 0:01:16.993 ***** 2025-11-01 13:59:33.581020 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.581031 | orchestrator | 2025-11-01 13:59:33.581042 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-11-01 13:59:33.581058 | orchestrator | Saturday 01 November 2025 13:59:32 +0000 (0:00:00.162) 0:01:17.156 ***** 2025-11-01 13:59:33.581070 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.581080 | orchestrator | 2025-11-01 13:59:33.581091 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] 
********************* 2025-11-01 13:59:33.581102 | orchestrator | Saturday 01 November 2025 13:59:32 +0000 (0:00:00.173) 0:01:17.330 ***** 2025-11-01 13:59:33.581112 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.581130 | orchestrator | 2025-11-01 13:59:33.581141 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-11-01 13:59:33.581152 | orchestrator | Saturday 01 November 2025 13:59:32 +0000 (0:00:00.133) 0:01:17.463 ***** 2025-11-01 13:59:33.581162 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.581173 | orchestrator | 2025-11-01 13:59:33.581184 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-11-01 13:59:33.581194 | orchestrator | Saturday 01 November 2025 13:59:32 +0000 (0:00:00.150) 0:01:17.614 ***** 2025-11-01 13:59:33.581205 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.581216 | orchestrator | 2025-11-01 13:59:33.581227 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-11-01 13:59:33.581237 | orchestrator | Saturday 01 November 2025 13:59:32 +0000 (0:00:00.157) 0:01:17.771 ***** 2025-11-01 13:59:33.581248 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.581259 | orchestrator | 2025-11-01 13:59:33.581269 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-11-01 13:59:33.581280 | orchestrator | Saturday 01 November 2025 13:59:33 +0000 (0:00:00.143) 0:01:17.914 ***** 2025-11-01 13:59:33.581291 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:33.581302 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:33.581313 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.581323 | orchestrator | 2025-11-01 13:59:33.581334 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-11-01 13:59:33.581345 | orchestrator | Saturday 01 November 2025 13:59:33 +0000 (0:00:00.164) 0:01:18.079 ***** 2025-11-01 13:59:33.581356 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:33.581366 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:33.581377 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:33.581407 | orchestrator | 2025-11-01 13:59:33.581418 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-11-01 13:59:33.581428 | orchestrator | Saturday 01 November 2025 13:59:33 +0000 (0:00:00.171) 0:01:18.251 ***** 2025-11-01 13:59:33.581445 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:36.795213 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:36.795288 | orchestrator | skipping: [testbed-node-5] 2025-11-01 
13:59:36.795303 | orchestrator | 2025-11-01 13:59:36.795316 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-11-01 13:59:36.795328 | orchestrator | Saturday 01 November 2025 13:59:33 +0000 (0:00:00.167) 0:01:18.418 ***** 2025-11-01 13:59:36.795339 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:36.795350 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:36.795361 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:36.795372 | orchestrator | 2025-11-01 13:59:36.795426 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-11-01 13:59:36.795438 | orchestrator | Saturday 01 November 2025 13:59:33 +0000 (0:00:00.146) 0:01:18.565 ***** 2025-11-01 13:59:36.795449 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:36.795480 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:36.795491 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:36.795502 | orchestrator | 2025-11-01 13:59:36.795513 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-11-01 13:59:36.795524 | orchestrator | Saturday 01 November 2025 13:59:33 +0000 (0:00:00.179) 0:01:18.744 ***** 2025-11-01 13:59:36.795535 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:36.795545 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:36.795556 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:36.795567 | orchestrator | 2025-11-01 13:59:36.795578 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-11-01 13:59:36.795589 | orchestrator | Saturday 01 November 2025 13:59:34 +0000 (0:00:00.403) 0:01:19.148 ***** 2025-11-01 13:59:36.795600 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:36.795611 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:36.795622 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:36.795632 | orchestrator | 2025-11-01 13:59:36.795643 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-11-01 13:59:36.795654 | orchestrator | Saturday 01 November 2025 13:59:34 +0000 (0:00:00.163) 0:01:19.311 ***** 2025-11-01 13:59:36.795665 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:36.795676 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:36.795687 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:36.795697 | orchestrator | 2025-11-01 13:59:36.795708 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-11-01 13:59:36.795719 | orchestrator | Saturday 01 November 2025 13:59:34 +0000 (0:00:00.161) 0:01:19.473 ***** 2025-11-01 13:59:36.795730 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:59:36.795741 | orchestrator | 2025-11-01 13:59:36.795752 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-11-01 13:59:36.795762 | orchestrator | Saturday 01 November 2025 13:59:35 +0000 (0:00:00.509) 0:01:19.983 ***** 2025-11-01 13:59:36.795774 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:59:36.795786 | orchestrator | 2025-11-01 13:59:36.795798 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-11-01 13:59:36.795809 | orchestrator | Saturday 01 November 2025 13:59:35 +0000 (0:00:00.574) 0:01:20.557 ***** 2025-11-01 13:59:36.795822 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:59:36.795834 | orchestrator | 2025-11-01 13:59:36.795846 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-11-01 13:59:36.795858 | orchestrator | Saturday 01 November 2025 13:59:35 +0000 (0:00:00.162) 0:01:20.719 ***** 2025-11-01 13:59:36.795870 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'vg_name': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'}) 2025-11-01 13:59:36.795883 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'vg_name': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'}) 2025-11-01 13:59:36.795895 | orchestrator | 2025-11-01 13:59:36.795907 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-11-01 13:59:36.795928 | orchestrator | Saturday 01 November 2025 13:59:36 +0000 (0:00:00.175) 0:01:20.895 ***** 2025-11-01 13:59:36.795954 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:36.795967 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:36.795980 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:36.795991 | orchestrator | 2025-11-01 13:59:36.796003 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-11-01 13:59:36.796015 | orchestrator | Saturday 01 November 2025 13:59:36 +0000 (0:00:00.170) 0:01:21.065 ***** 2025-11-01 13:59:36.796027 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:36.796039 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:36.796052 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:36.796063 | orchestrator | 2025-11-01 13:59:36.796075 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes 
is missing] ************************ 2025-11-01 13:59:36.796087 | orchestrator | Saturday 01 November 2025 13:59:36 +0000 (0:00:00.181) 0:01:21.247 ***** 2025-11-01 13:59:36.796099 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'})  2025-11-01 13:59:36.796127 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'})  2025-11-01 13:59:36.796139 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:59:36.796150 | orchestrator | 2025-11-01 13:59:36.796160 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-11-01 13:59:36.796171 | orchestrator | Saturday 01 November 2025 13:59:36 +0000 (0:00:00.182) 0:01:21.430 ***** 2025-11-01 13:59:36.796182 | orchestrator | ok: [testbed-node-5] => { 2025-11-01 13:59:36.796193 | orchestrator |  "lvm_report": { 2025-11-01 13:59:36.796204 | orchestrator |  "lv": [ 2025-11-01 13:59:36.796214 | orchestrator |  { 2025-11-01 13:59:36.796225 | orchestrator |  "lv_name": "osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9", 2025-11-01 13:59:36.796241 | orchestrator |  "vg_name": "ceph-7e540012-4fa7-591e-a498-149cbb5b09d9" 2025-11-01 13:59:36.796252 | orchestrator |  }, 2025-11-01 13:59:36.796263 | orchestrator |  { 2025-11-01 13:59:36.796274 | orchestrator |  "lv_name": "osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f", 2025-11-01 13:59:36.796284 | orchestrator |  "vg_name": "ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f" 2025-11-01 13:59:36.796295 | orchestrator |  } 2025-11-01 13:59:36.796306 | orchestrator |  ], 2025-11-01 13:59:36.796317 | orchestrator |  "pv": [ 2025-11-01 13:59:36.796328 | orchestrator |  { 2025-11-01 13:59:36.796338 | orchestrator |  "pv_name": "/dev/sdb", 2025-11-01 13:59:36.796349 | orchestrator |  "vg_name": "ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f" 2025-11-01 13:59:36.796360 | orchestrator |  }, 2025-11-01 13:59:36.796371 | orchestrator |  { 2025-11-01 13:59:36.796401 | orchestrator |  "pv_name": "/dev/sdc", 2025-11-01 13:59:36.796412 | orchestrator |  "vg_name": "ceph-7e540012-4fa7-591e-a498-149cbb5b09d9" 2025-11-01 13:59:36.796423 | orchestrator |  } 2025-11-01 13:59:36.796434 | orchestrator |  ] 2025-11-01 13:59:36.796444 | orchestrator |  } 2025-11-01 13:59:36.796455 | orchestrator | } 2025-11-01 13:59:36.796466 | orchestrator | 2025-11-01 13:59:36.796477 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:59:36.796494 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-11-01 13:59:36.796506 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-11-01 13:59:36.796517 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-11-01 13:59:36.796528 | orchestrator | 2025-11-01 13:59:36.796538 | orchestrator | 2025-11-01 13:59:36.796549 | orchestrator | 2025-11-01 13:59:36.796560 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:59:36.796570 | orchestrator | Saturday 01 November 2025 13:59:36 +0000 (0:00:00.176) 0:01:21.607 ***** 2025-11-01 13:59:36.796581 | orchestrator | =============================================================================== 2025-11-01 13:59:36.796592 | 
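
The lvm_report printed above is assembled from two registered commands (_lvs_cmd_output and _pvs_cmd_output) and lists every Ceph LV with its VG and every PV with its VG; this is what the "Fail if ... defined in lvm_volumes is missing" checks run against. A sketch of how such data can be collected, again not the literal OSISM tasks:

# Sketch only: LV->VG and PV->VG pairs as JSON, restricted to the "ceph-"
# volume groups created earlier in the play.
- name: Get list of Ceph LVs with associated VGs
  ansible.builtin.command: >
    lvs --reportformat json -o lv_name,vg_name --select 'vg_name =~ ^ceph-'
  register: _lvs_cmd_output
  changed_when: false

- name: Get list of Ceph PVs with associated VGs
  ansible.builtin.command: >
    pvs --reportformat json -o pv_name,vg_name --select 'vg_name =~ ^ceph-'
  register: _pvs_cmd_output
  changed_when: false

- name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
  ansible.builtin.set_fact:
    lvm_report:
      lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
      pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"
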
orchestrator | Create block VGs -------------------------------------------------------- 6.11s 2025-11-01 13:59:36.796602 | orchestrator | Create block LVs -------------------------------------------------------- 4.37s 2025-11-01 13:59:36.796613 | orchestrator | Add known partitions to the list of available block devices ------------- 2.02s 2025-11-01 13:59:36.796624 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.93s 2025-11-01 13:59:36.796634 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.91s 2025-11-01 13:59:36.796645 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.76s 2025-11-01 13:59:36.796656 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.65s 2025-11-01 13:59:36.796667 | orchestrator | Add known links to the list of available block devices ------------------ 1.53s 2025-11-01 13:59:36.796683 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.52s 2025-11-01 13:59:37.244051 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s 2025-11-01 13:59:37.244110 | orchestrator | Print LVM report data --------------------------------------------------- 1.05s 2025-11-01 13:59:37.244122 | orchestrator | Add known links to the list of available block devices ------------------ 0.96s 2025-11-01 13:59:37.244133 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2025-11-01 13:59:37.244144 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.89s 2025-11-01 13:59:37.244154 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2025-11-01 13:59:37.244165 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s 2025-11-01 13:59:37.244176 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.79s 2025-11-01 13:59:37.244187 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.78s 2025-11-01 13:59:37.244197 | orchestrator | Get initial list of available block devices ----------------------------- 0.77s 2025-11-01 13:59:37.244208 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2025-11-01 13:59:49.836621 | orchestrator | 2025-11-01 13:59:49 | INFO  | Task 82116c1c-29d2-4809-b6c4-7156647ef13f (facts) was prepared for execution. 2025-11-01 13:59:49.836730 | orchestrator | 2025-11-01 13:59:49 | INFO  | It takes a moment until task 82116c1c-29d2-4809-b6c4-7156647ef13f (facts) has been started and output is visible here. 
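
Taken together, the play leaves testbed-node-5 with two fully consumed OSD devices and, on the configuration side, an lvm_volumes list of roughly the following shape. The values are copied from the loop items in the log; the exact variable layout used by the deployment is an assumption, not dumped from the inventory.

lvm_volumes:
  - data: osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f
    data_vg: ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f
  - data: osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9
    data_vg: ceph-7e540012-4fa7-591e-a498-149cbb5b09d9
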
2025-11-01 14:00:03.767522 | orchestrator | 2025-11-01 14:00:03.767636 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-11-01 14:00:03.767665 | orchestrator | 2025-11-01 14:00:03.767687 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-11-01 14:00:03.767707 | orchestrator | Saturday 01 November 2025 13:59:54 +0000 (0:00:00.264) 0:00:00.264 ***** 2025-11-01 14:00:03.767728 | orchestrator | ok: [testbed-manager] 2025-11-01 14:00:03.767749 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:00:03.767792 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:00:03.767804 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:00:03.767815 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:00:03.767826 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:00:03.767836 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:00:03.767847 | orchestrator | 2025-11-01 14:00:03.767858 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-11-01 14:00:03.767868 | orchestrator | Saturday 01 November 2025 13:59:55 +0000 (0:00:01.138) 0:00:01.402 ***** 2025-11-01 14:00:03.767890 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:00:03.767902 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:00:03.767913 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:00:03.767924 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:00:03.767935 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:00:03.767945 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:00:03.767956 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:00:03.767967 | orchestrator | 2025-11-01 14:00:03.767978 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-01 14:00:03.767988 | orchestrator | 2025-11-01 14:00:03.767999 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-01 14:00:03.768010 | orchestrator | Saturday 01 November 2025 13:59:56 +0000 (0:00:01.317) 0:00:02.720 ***** 2025-11-01 14:00:03.768020 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:00:03.768031 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:00:03.768042 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:00:03.768052 | orchestrator | ok: [testbed-manager] 2025-11-01 14:00:03.768063 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:00:03.768073 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:00:03.768086 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:00:03.768099 | orchestrator | 2025-11-01 14:00:03.768111 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-11-01 14:00:03.768124 | orchestrator | 2025-11-01 14:00:03.768137 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-11-01 14:00:03.768149 | orchestrator | Saturday 01 November 2025 14:00:02 +0000 (0:00:05.849) 0:00:08.570 ***** 2025-11-01 14:00:03.768161 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:00:03.768174 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:00:03.768187 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:00:03.768200 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:00:03.768212 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:00:03.768224 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:00:03.768237 | orchestrator | skipping: 
[testbed-node-5] 2025-11-01 14:00:03.768249 | orchestrator | 2025-11-01 14:00:03.768261 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:00:03.768274 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 14:00:03.768288 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 14:00:03.768300 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 14:00:03.768313 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 14:00:03.768325 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 14:00:03.768338 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 14:00:03.768350 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 14:00:03.768372 | orchestrator | 2025-11-01 14:00:03.768407 | orchestrator | 2025-11-01 14:00:03.768420 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:00:03.768433 | orchestrator | Saturday 01 November 2025 14:00:03 +0000 (0:00:00.589) 0:00:09.159 ***** 2025-11-01 14:00:03.768446 | orchestrator | =============================================================================== 2025-11-01 14:00:03.768456 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.85s 2025-11-01 14:00:03.768467 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s 2025-11-01 14:00:03.768478 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s 2025-11-01 14:00:03.768488 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2025-11-01 14:00:16.434286 | orchestrator | 2025-11-01 14:00:16 | INFO  | Task 67462559-4047-45bd-b988-e30cc8f70fd3 (frr) was prepared for execution. 2025-11-01 14:00:16.434375 | orchestrator | 2025-11-01 14:00:16 | INFO  | It takes a moment until task 67462559-4047-45bd-b988-e30cc8f70fd3 (frr) has been started and output is visible here. 
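
The facts play above only ensures the custom facts directory exists, optionally copies fact files into it (skipped here), and re-gathers facts on all hosts. A minimal sketch of those steps, assuming the standard /etc/ansible/facts.d location, which is not shown in the log:

# Sketch of the osism.commons.facts flow as it appears above; the real role
# has more options.
- name: Create custom facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Gathers facts about hosts
  ansible.builtin.setup:
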
2025-11-01 14:00:44.578136 | orchestrator | 2025-11-01 14:00:44.578246 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-11-01 14:00:44.578262 | orchestrator | 2025-11-01 14:00:44.578274 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-11-01 14:00:44.578286 | orchestrator | Saturday 01 November 2025 14:00:20 +0000 (0:00:00.244) 0:00:00.244 ***** 2025-11-01 14:00:44.578297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-11-01 14:00:44.578310 | orchestrator | 2025-11-01 14:00:44.578321 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-11-01 14:00:44.578332 | orchestrator | Saturday 01 November 2025 14:00:21 +0000 (0:00:00.251) 0:00:00.496 ***** 2025-11-01 14:00:44.578343 | orchestrator | changed: [testbed-manager] 2025-11-01 14:00:44.578355 | orchestrator | 2025-11-01 14:00:44.578366 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-11-01 14:00:44.578377 | orchestrator | Saturday 01 November 2025 14:00:22 +0000 (0:00:01.256) 0:00:01.752 ***** 2025-11-01 14:00:44.578421 | orchestrator | changed: [testbed-manager] 2025-11-01 14:00:44.578433 | orchestrator | 2025-11-01 14:00:44.578460 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-11-01 14:00:44.578472 | orchestrator | Saturday 01 November 2025 14:00:33 +0000 (0:00:10.741) 0:00:12.494 ***** 2025-11-01 14:00:44.578482 | orchestrator | ok: [testbed-manager] 2025-11-01 14:00:44.578494 | orchestrator | 2025-11-01 14:00:44.578505 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-11-01 14:00:44.578515 | orchestrator | Saturday 01 November 2025 14:00:34 +0000 (0:00:01.117) 0:00:13.611 ***** 2025-11-01 14:00:44.578526 | orchestrator | changed: [testbed-manager] 2025-11-01 14:00:44.578537 | orchestrator | 2025-11-01 14:00:44.578548 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-11-01 14:00:44.578558 | orchestrator | Saturday 01 November 2025 14:00:35 +0000 (0:00:01.008) 0:00:14.620 ***** 2025-11-01 14:00:44.578569 | orchestrator | ok: [testbed-manager] 2025-11-01 14:00:44.578580 | orchestrator | 2025-11-01 14:00:44.578591 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-11-01 14:00:44.578602 | orchestrator | Saturday 01 November 2025 14:00:36 +0000 (0:00:01.379) 0:00:16.000 ***** 2025-11-01 14:00:44.578613 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 14:00:44.578623 | orchestrator | 2025-11-01 14:00:44.578636 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-11-01 14:00:44.578648 | orchestrator | Saturday 01 November 2025 14:00:37 +0000 (0:00:00.861) 0:00:16.862 ***** 2025-11-01 14:00:44.578661 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:00:44.578673 | orchestrator | 2025-11-01 14:00:44.578686 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-11-01 14:00:44.578720 | orchestrator | Saturday 01 November 2025 14:00:37 +0000 (0:00:00.157) 0:00:17.019 ***** 2025-11-01 14:00:44.578733 | orchestrator | changed: [testbed-manager] 2025-11-01 14:00:44.578746 | orchestrator 
| 2025-11-01 14:00:44.578758 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-11-01 14:00:44.578770 | orchestrator | Saturday 01 November 2025 14:00:38 +0000 (0:00:01.016) 0:00:18.036 ***** 2025-11-01 14:00:44.578782 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-11-01 14:00:44.578795 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-11-01 14:00:44.578808 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-11-01 14:00:44.578820 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-11-01 14:00:44.578833 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-11-01 14:00:44.578846 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-11-01 14:00:44.578858 | orchestrator | 2025-11-01 14:00:44.578870 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-11-01 14:00:44.578883 | orchestrator | Saturday 01 November 2025 14:00:41 +0000 (0:00:02.332) 0:00:20.369 ***** 2025-11-01 14:00:44.578895 | orchestrator | ok: [testbed-manager] 2025-11-01 14:00:44.578907 | orchestrator | 2025-11-01 14:00:44.578920 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-11-01 14:00:44.578932 | orchestrator | Saturday 01 November 2025 14:00:42 +0000 (0:00:01.783) 0:00:22.152 ***** 2025-11-01 14:00:44.578944 | orchestrator | changed: [testbed-manager] 2025-11-01 14:00:44.578957 | orchestrator | 2025-11-01 14:00:44.578970 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:00:44.578983 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 14:00:44.578994 | orchestrator | 2025-11-01 14:00:44.579005 | orchestrator | 2025-11-01 14:00:44.579016 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:00:44.579027 | orchestrator | Saturday 01 November 2025 14:00:44 +0000 (0:00:01.470) 0:00:23.622 ***** 2025-11-01 14:00:44.579037 | orchestrator | =============================================================================== 2025-11-01 14:00:44.579048 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.74s 2025-11-01 14:00:44.579059 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.33s 2025-11-01 14:00:44.579069 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.78s 2025-11-01 14:00:44.579080 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.47s 2025-11-01 14:00:44.579108 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.38s 2025-11-01 14:00:44.579119 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.26s 2025-11-01 14:00:44.579130 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.12s 2025-11-01 14:00:44.579141 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 1.02s 2025-11-01 
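
On the manager, the frr role pins and installs the frr package, drops vtysh.conf, daemons and frr.conf into /etc/frr, sets the routing-related sysctl parameters listed above, and restarts the service. The sysctl step maps one-to-one onto the loop items in the log; the following sketch assumes the ansible.posix.sysctl module (the module choice is an assumption, the values are taken from the log):

# Illustrative only; sysctl names and values match the loop items above.
- name: Install frr package
  ansible.builtin.apt:
    name: frr
    state: present

- name: Set sysctl parameters
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_set: true
  loop:
    - { name: net.ipv4.ip_forward, value: 1 }
    - { name: net.ipv4.conf.all.send_redirects, value: 0 }
    - { name: net.ipv4.conf.all.accept_redirects, value: 0 }
    - { name: net.ipv4.fib_multipath_hash_policy, value: 1 }
    - { name: net.ipv4.conf.default.ignore_routes_with_linkdown, value: 1 }
    - { name: net.ipv4.conf.all.rp_filter, value: 2 }

- name: Restart frr service
  ansible.builtin.service:
    name: frr
    state: restarted
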
14:00:44.579151 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.01s 2025-11-01 14:00:44.579162 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.86s 2025-11-01 14:00:44.579173 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.25s 2025-11-01 14:00:44.579184 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s 2025-11-01 14:00:44.900742 | orchestrator | 2025-11-01 14:00:44.905709 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Nov 1 14:00:44 UTC 2025 2025-11-01 14:00:44.905760 | orchestrator | 2025-11-01 14:00:46.961515 | orchestrator | 2025-11-01 14:00:46 | INFO  | Collection nutshell is prepared for execution 2025-11-01 14:00:46.961595 | orchestrator | 2025-11-01 14:00:46 | INFO  | D [0] - dotfiles 2025-11-01 14:00:56.995697 | orchestrator | 2025-11-01 14:00:56 | INFO  | D [0] - homer 2025-11-01 14:00:56.995782 | orchestrator | 2025-11-01 14:00:56 | INFO  | D [0] - netdata 2025-11-01 14:00:56.995792 | orchestrator | 2025-11-01 14:00:56 | INFO  | D [0] - openstackclient 2025-11-01 14:00:56.996848 | orchestrator | 2025-11-01 14:00:56 | INFO  | D [0] - phpmyadmin 2025-11-01 14:00:56.996936 | orchestrator | 2025-11-01 14:00:56 | INFO  | A [0] - common 2025-11-01 14:00:57.001864 | orchestrator | 2025-11-01 14:00:57 | INFO  | A [1] -- loadbalancer 2025-11-01 14:00:57.001895 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [2] --- opensearch 2025-11-01 14:00:57.002227 | orchestrator | 2025-11-01 14:00:57 | INFO  | A [2] --- mariadb-ng 2025-11-01 14:00:57.002599 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [3] ---- horizon 2025-11-01 14:00:57.003192 | orchestrator | 2025-11-01 14:00:57 | INFO  | A [3] ---- keystone 2025-11-01 14:00:57.003503 | orchestrator | 2025-11-01 14:00:57 | INFO  | A [4] ----- neutron 2025-11-01 14:00:57.003718 | orchestrator | 2025-11-01 14:00:57 | INFO  | A [5] ------ wait-for-nova 2025-11-01 14:00:57.003810 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [6] ------- octavia 2025-11-01 14:00:57.005778 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [4] ----- barbican 2025-11-01 14:00:57.005797 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [4] ----- designate 2025-11-01 14:00:57.006170 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [4] ----- ironic 2025-11-01 14:00:57.006192 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [4] ----- placement 2025-11-01 14:00:57.006910 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [4] ----- magnum 2025-11-01 14:00:57.007683 | orchestrator | 2025-11-01 14:00:57 | INFO  | A [1] -- openvswitch 2025-11-01 14:00:57.007879 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [2] --- ovn 2025-11-01 14:00:57.008136 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [1] -- memcached 2025-11-01 14:00:57.008451 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [1] -- redis 2025-11-01 14:00:57.008472 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [1] -- rabbitmq-ng 2025-11-01 14:00:57.009072 | orchestrator | 2025-11-01 14:00:57 | INFO  | A [0] - kubernetes 2025-11-01 14:00:57.012981 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [1] -- kubeconfig 2025-11-01 14:00:57.013003 | orchestrator | 2025-11-01 14:00:57 | INFO  | A [1] -- copy-kubeconfig 2025-11-01 14:00:57.013241 | orchestrator | 2025-11-01 14:00:57 | INFO  | A [0] - ceph 2025-11-01 14:00:57.015881 | orchestrator | 2025-11-01 14:00:57 | INFO  | A [1] -- ceph-pools 2025-11-01 
14:00:57.015902 | orchestrator | 2025-11-01 14:00:57 | INFO  | A [2] --- copy-ceph-keys 2025-11-01 14:00:57.015914 | orchestrator | 2025-11-01 14:00:57 | INFO  | A [3] ---- cephclient 2025-11-01 14:00:57.015925 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-11-01 14:00:57.016326 | orchestrator | 2025-11-01 14:00:57 | INFO  | A [4] ----- wait-for-keystone 2025-11-01 14:00:57.016539 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [5] ------ kolla-ceph-rgw 2025-11-01 14:00:57.016561 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [5] ------ glance 2025-11-01 14:00:57.016574 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [5] ------ cinder 2025-11-01 14:00:57.016585 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [5] ------ nova 2025-11-01 14:00:57.016950 | orchestrator | 2025-11-01 14:00:57 | INFO  | A [4] ----- prometheus 2025-11-01 14:00:57.016989 | orchestrator | 2025-11-01 14:00:57 | INFO  | D [5] ------ grafana 2025-11-01 14:00:57.288693 | orchestrator | 2025-11-01 14:00:57 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-11-01 14:00:57.288761 | orchestrator | 2025-11-01 14:00:57 | INFO  | Tasks are running in the background 2025-11-01 14:01:00.759555 | orchestrator | 2025-11-01 14:01:00 | INFO  | No task IDs specified, wait for all currently running tasks 2025-11-01 14:01:02.906299 | orchestrator | 2025-11-01 14:01:02 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:02.907059 | orchestrator | 2025-11-01 14:01:02 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:02.911003 | orchestrator | 2025-11-01 14:01:02 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:02.912580 | orchestrator | 2025-11-01 14:01:02 | INFO  | Task 809f3cdb-63f1-4c0d-ac8c-f307210f56ea is in state STARTED 2025-11-01 14:01:02.913645 | orchestrator | 2025-11-01 14:01:02 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:02.917563 | orchestrator | 2025-11-01 14:01:02 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:02.918409 | orchestrator | 2025-11-01 14:01:02 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:02.918420 | orchestrator | 2025-11-01 14:01:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:05.979731 | orchestrator | 2025-11-01 14:01:05 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:05.979810 | orchestrator | 2025-11-01 14:01:05 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:05.980505 | orchestrator | 2025-11-01 14:01:05 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:05.984118 | orchestrator | 2025-11-01 14:01:05 | INFO  | Task 809f3cdb-63f1-4c0d-ac8c-f307210f56ea is in state STARTED 2025-11-01 14:01:05.987844 | orchestrator | 2025-11-01 14:01:05 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:05.988498 | orchestrator | 2025-11-01 14:01:05 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:05.989288 | orchestrator | 2025-11-01 14:01:05 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:05.989353 | orchestrator | 2025-11-01 14:01:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:09.085926 | orchestrator | 2025-11-01 14:01:09 | INFO  
| Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:09.088460 | orchestrator | 2025-11-01 14:01:09 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:09.088474 | orchestrator | 2025-11-01 14:01:09 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:09.088479 | orchestrator | 2025-11-01 14:01:09 | INFO  | Task 809f3cdb-63f1-4c0d-ac8c-f307210f56ea is in state STARTED 2025-11-01 14:01:09.089029 | orchestrator | 2025-11-01 14:01:09 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:09.090292 | orchestrator | 2025-11-01 14:01:09 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:09.091187 | orchestrator | 2025-11-01 14:01:09 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:09.091414 | orchestrator | 2025-11-01 14:01:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:12.221433 | orchestrator | 2025-11-01 14:01:12 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:12.221571 | orchestrator | 2025-11-01 14:01:12 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:12.222167 | orchestrator | 2025-11-01 14:01:12 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:12.222774 | orchestrator | 2025-11-01 14:01:12 | INFO  | Task 809f3cdb-63f1-4c0d-ac8c-f307210f56ea is in state STARTED 2025-11-01 14:01:12.223280 | orchestrator | 2025-11-01 14:01:12 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:12.231860 | orchestrator | 2025-11-01 14:01:12 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:12.231915 | orchestrator | 2025-11-01 14:01:12 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:12.231927 | orchestrator | 2025-11-01 14:01:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:15.382428 | orchestrator | 2025-11-01 14:01:15 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:15.382527 | orchestrator | 2025-11-01 14:01:15 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:15.382543 | orchestrator | 2025-11-01 14:01:15 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:15.382556 | orchestrator | 2025-11-01 14:01:15 | INFO  | Task 809f3cdb-63f1-4c0d-ac8c-f307210f56ea is in state STARTED 2025-11-01 14:01:15.382567 | orchestrator | 2025-11-01 14:01:15 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:15.382578 | orchestrator | 2025-11-01 14:01:15 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:15.382589 | orchestrator | 2025-11-01 14:01:15 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:15.382600 | orchestrator | 2025-11-01 14:01:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:18.467599 | orchestrator | 2025-11-01 14:01:18 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:18.467693 | orchestrator | 2025-11-01 14:01:18 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:18.467705 | orchestrator | 2025-11-01 14:01:18 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:18.467715 
| orchestrator | 2025-11-01 14:01:18 | INFO  | Task 809f3cdb-63f1-4c0d-ac8c-f307210f56ea is in state STARTED 2025-11-01 14:01:18.467725 | orchestrator | 2025-11-01 14:01:18 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:18.467735 | orchestrator | 2025-11-01 14:01:18 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:18.467745 | orchestrator | 2025-11-01 14:01:18 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:18.467756 | orchestrator | 2025-11-01 14:01:18 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:21.627119 | orchestrator | 2025-11-01 14:01:21 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:21.627190 | orchestrator | 2025-11-01 14:01:21 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:21.627196 | orchestrator | 2025-11-01 14:01:21 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:21.627201 | orchestrator | 2025-11-01 14:01:21 | INFO  | Task 809f3cdb-63f1-4c0d-ac8c-f307210f56ea is in state STARTED 2025-11-01 14:01:21.627219 | orchestrator | 2025-11-01 14:01:21 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:21.627254 | orchestrator | 2025-11-01 14:01:21 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:21.627258 | orchestrator | 2025-11-01 14:01:21 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:21.627264 | orchestrator | 2025-11-01 14:01:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:24.707164 | orchestrator | 2025-11-01 14:01:24 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:24.708597 | orchestrator | 2025-11-01 14:01:24 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:24.710388 | orchestrator | 2025-11-01 14:01:24 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:24.726723 | orchestrator | 2025-11-01 14:01:24 | INFO  | Task 809f3cdb-63f1-4c0d-ac8c-f307210f56ea is in state STARTED 2025-11-01 14:01:24.726762 | orchestrator | 2025-11-01 14:01:24 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:24.726773 | orchestrator | 2025-11-01 14:01:24 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:24.726784 | orchestrator | 2025-11-01 14:01:24 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:24.726795 | orchestrator | 2025-11-01 14:01:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:27.950814 | orchestrator | 2025-11-01 14:01:27 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:27.953474 | orchestrator | 2025-11-01 14:01:27 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:27.955503 | orchestrator | 2025-11-01 14:01:27 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:27.961572 | orchestrator | 2025-11-01 14:01:27 | INFO  | Task 809f3cdb-63f1-4c0d-ac8c-f307210f56ea is in state SUCCESS 2025-11-01 14:01:27.962623 | orchestrator | 2025-11-01 14:01:27.962651 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-11-01 14:01:27.962663 | orchestrator | 2025-11-01 14:01:27.962675 | 
orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-11-01 14:01:27.962687 | orchestrator | Saturday 01 November 2025 14:01:13 +0000 (0:00:00.970) 0:00:00.972 ***** 2025-11-01 14:01:27.962698 | orchestrator | changed: [testbed-manager] 2025-11-01 14:01:27.962711 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:01:27.962722 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:01:27.962733 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:01:27.962744 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:01:27.962755 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:01:27.962766 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:01:27.962777 | orchestrator | 2025-11-01 14:01:27.962788 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-11-01 14:01:27.962800 | orchestrator | Saturday 01 November 2025 14:01:17 +0000 (0:00:03.994) 0:00:04.966 ***** 2025-11-01 14:01:27.962812 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-11-01 14:01:27.962823 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-11-01 14:01:27.962834 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-11-01 14:01:27.962846 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-11-01 14:01:27.962856 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-11-01 14:01:27.962868 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-11-01 14:01:27.962884 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-11-01 14:01:27.962917 | orchestrator | 2025-11-01 14:01:27.962929 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-11-01 14:01:27.962940 | orchestrator | Saturday 01 November 2025 14:01:19 +0000 (0:00:01.883) 0:00:06.850 ***** 2025-11-01 14:01:27.962959 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-01 14:01:17.986622', 'end': '2025-11-01 14:01:17.995110', 'delta': '0:00:00.008488', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-01 14:01:27.962975 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-01 14:01:18.608667', 'end': '2025-11-01 14:01:18.617431', 'delta': '0:00:00.008764', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': 
False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-01 14:01:27.962987 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-01 14:01:17.979244', 'end': '2025-11-01 14:01:17.982980', 'delta': '0:00:00.003736', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-01 14:01:27.963019 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-01 14:01:18.879840', 'end': '2025-11-01 14:01:18.886227', 'delta': '0:00:00.006387', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-01 14:01:27.963036 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-01 14:01:18.339329', 'end': '2025-11-01 14:01:18.346308', 'delta': '0:00:00.006979', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-01 14:01:27.963060 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-01 14:01:18.252447', 'end': '2025-11-01 14:01:18.260147', 'delta': '0:00:00.007700', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-01 14:01:27.963073 | orchestrator | ok: 
[testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-01 14:01:19.057351', 'end': '2025-11-01 14:01:19.065582', 'delta': '0:00:00.008231', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-11-01 14:01:27.963084 | orchestrator | 2025-11-01 14:01:27.963095 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-11-01 14:01:27.963107 | orchestrator | Saturday 01 November 2025 14:01:20 +0000 (0:00:01.034) 0:00:07.884 ***** 2025-11-01 14:01:27.963118 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-11-01 14:01:27.963129 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-11-01 14:01:27.963140 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-11-01 14:01:27.963151 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-11-01 14:01:27.963162 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-11-01 14:01:27.963173 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-11-01 14:01:27.963184 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-11-01 14:01:27.963195 | orchestrator | 2025-11-01 14:01:27.963206 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-11-01 14:01:27.963217 | orchestrator | Saturday 01 November 2025 14:01:21 +0000 (0:00:00.957) 0:00:08.842 ***** 2025-11-01 14:01:27.963230 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-11-01 14:01:27.963242 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-11-01 14:01:27.963254 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-11-01 14:01:27.963266 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-11-01 14:01:27.963279 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-11-01 14:01:27.963291 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-11-01 14:01:27.963304 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-11-01 14:01:27.963316 | orchestrator | 2025-11-01 14:01:27.963329 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:01:27.963349 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:01:27.963370 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:01:27.963383 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:01:27.963415 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:01:27.963428 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:01:27.963441 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:01:27.963458 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:01:27.963471 | orchestrator | 2025-11-01 14:01:27.963483 | orchestrator | 2025-11-01 14:01:27.963496 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:01:27.963509 | orchestrator | Saturday 01 November 2025 14:01:23 +0000 (0:00:02.263) 0:00:11.105 ***** 2025-11-01 14:01:27.963521 | orchestrator | =============================================================================== 2025-11-01 14:01:27.963534 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.99s 2025-11-01 14:01:27.963546 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.26s 2025-11-01 14:01:27.963559 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.88s 2025-11-01 14:01:27.963572 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.03s 2025-11-01 14:01:27.963585 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. 
---- 0.96s 2025-11-01 14:01:27.976649 | orchestrator | 2025-11-01 14:01:27 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:27.986307 | orchestrator | 2025-11-01 14:01:27 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:28.001806 | orchestrator | 2025-11-01 14:01:27 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:01:28.004871 | orchestrator | 2025-11-01 14:01:28 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:28.004912 | orchestrator | 2025-11-01 14:01:28 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:31.201457 | orchestrator | 2025-11-01 14:01:31 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:31.201545 | orchestrator | 2025-11-01 14:01:31 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:31.201558 | orchestrator | 2025-11-01 14:01:31 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:31.201568 | orchestrator | 2025-11-01 14:01:31 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:31.201578 | orchestrator | 2025-11-01 14:01:31 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:31.201587 | orchestrator | 2025-11-01 14:01:31 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:01:31.201597 | orchestrator | 2025-11-01 14:01:31 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:31.201606 | orchestrator | 2025-11-01 14:01:31 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:34.224458 | orchestrator | 2025-11-01 14:01:34 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:34.227136 | orchestrator | 2025-11-01 14:01:34 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:34.227556 | orchestrator | 2025-11-01 14:01:34 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:34.228792 | orchestrator | 2025-11-01 14:01:34 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:34.233002 | orchestrator | 2025-11-01 14:01:34 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:34.233472 | orchestrator | 2025-11-01 14:01:34 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:01:34.234085 | orchestrator | 2025-11-01 14:01:34 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:34.234283 | orchestrator | 2025-11-01 14:01:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:37.329470 | orchestrator | 2025-11-01 14:01:37 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:37.329532 | orchestrator | 2025-11-01 14:01:37 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:37.329540 | orchestrator | 2025-11-01 14:01:37 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:37.329546 | orchestrator | 2025-11-01 14:01:37 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:37.329552 | orchestrator | 2025-11-01 14:01:37 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:37.329557 | orchestrator | 2025-11-01 14:01:37 | INFO  | Task 
42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:01:37.329563 | orchestrator | 2025-11-01 14:01:37 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:37.329570 | orchestrator | 2025-11-01 14:01:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:40.377670 | orchestrator | 2025-11-01 14:01:40 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:40.379908 | orchestrator | 2025-11-01 14:01:40 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:40.382244 | orchestrator | 2025-11-01 14:01:40 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:40.384806 | orchestrator | 2025-11-01 14:01:40 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:40.386770 | orchestrator | 2025-11-01 14:01:40 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:40.387481 | orchestrator | 2025-11-01 14:01:40 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:01:40.389802 | orchestrator | 2025-11-01 14:01:40 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:40.390105 | orchestrator | 2025-11-01 14:01:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:43.541284 | orchestrator | 2025-11-01 14:01:43 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:43.548030 | orchestrator | 2025-11-01 14:01:43 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:43.548064 | orchestrator | 2025-11-01 14:01:43 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:43.548075 | orchestrator | 2025-11-01 14:01:43 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:43.548085 | orchestrator | 2025-11-01 14:01:43 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:43.548121 | orchestrator | 2025-11-01 14:01:43 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:01:43.548131 | orchestrator | 2025-11-01 14:01:43 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:43.548141 | orchestrator | 2025-11-01 14:01:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:46.615721 | orchestrator | 2025-11-01 14:01:46 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:46.615798 | orchestrator | 2025-11-01 14:01:46 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:46.615808 | orchestrator | 2025-11-01 14:01:46 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:46.615818 | orchestrator | 2025-11-01 14:01:46 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:46.615827 | orchestrator | 2025-11-01 14:01:46 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:46.615836 | orchestrator | 2025-11-01 14:01:46 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:01:46.616576 | orchestrator | 2025-11-01 14:01:46 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:46.616594 | orchestrator | 2025-11-01 14:01:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:49.729381 | orchestrator | 2025-11-01 
14:01:49 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:49.729516 | orchestrator | 2025-11-01 14:01:49 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:49.729531 | orchestrator | 2025-11-01 14:01:49 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:49.729849 | orchestrator | 2025-11-01 14:01:49 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state STARTED 2025-11-01 14:01:49.729863 | orchestrator | 2025-11-01 14:01:49 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:49.729874 | orchestrator | 2025-11-01 14:01:49 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:01:49.729885 | orchestrator | 2025-11-01 14:01:49 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:49.729896 | orchestrator | 2025-11-01 14:01:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:52.969752 | orchestrator | 2025-11-01 14:01:52 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:52.969852 | orchestrator | 2025-11-01 14:01:52 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:52.969866 | orchestrator | 2025-11-01 14:01:52 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:52.969878 | orchestrator | 2025-11-01 14:01:52 | INFO  | Task 76ba6249-b5b3-4b2c-8b5e-61a8605b6420 is in state SUCCESS 2025-11-01 14:01:52.969889 | orchestrator | 2025-11-01 14:01:52 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:52.969900 | orchestrator | 2025-11-01 14:01:52 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:01:52.969911 | orchestrator | 2025-11-01 14:01:52 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:52.969923 | orchestrator | 2025-11-01 14:01:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:56.296863 | orchestrator | 2025-11-01 14:01:55 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:56.297021 | orchestrator | 2025-11-01 14:01:55 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:56.297044 | orchestrator | 2025-11-01 14:01:55 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:56.297061 | orchestrator | 2025-11-01 14:01:55 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:01:56.297077 | orchestrator | 2025-11-01 14:01:55 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:01:56.297105 | orchestrator | 2025-11-01 14:01:55 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:56.297122 | orchestrator | 2025-11-01 14:01:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:01:58.944055 | orchestrator | 2025-11-01 14:01:58 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:01:58.944857 | orchestrator | 2025-11-01 14:01:58 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:01:58.946884 | orchestrator | 2025-11-01 14:01:58 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:01:58.948142 | orchestrator | 2025-11-01 14:01:58 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 
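The interleaved manager output in this stretch of the log follows a simple poll-and-wait pattern: the client keeps a set of task UUIDs, prints the current state of each one, drops a task once it reaches SUCCESS, and sleeps for a second before the next check. The block below is only a minimal sketch of that pattern under stated assumptions: `fetch_task_state()` is a hypothetical stand-in for whatever API the real tool queries, and this is an illustration, not the actual OSISM implementation.

```python
# Minimal sketch of the poll/sleep loop visible in this console output.
# fetch_task_state() is a hypothetical callable, not part of OSISM's code.
import time
from typing import Callable


def wait_for_tasks(task_ids: set[str],
                   fetch_task_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    pending = set(task_ids)
    while pending:
        # Report every task that is still being watched.
        for task_id in sorted(pending):
            state = fetch_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                # Finished tasks are no longer polled.
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```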
2025-11-01 14:01:58.995003 | orchestrator | 2025-11-01 14:01:58 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:01:58.995026 | orchestrator | 2025-11-01 14:01:58 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:01:58.995037 | orchestrator | 2025-11-01 14:01:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:02.023617 | orchestrator | 2025-11-01 14:02:02 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:02:02.023698 | orchestrator | 2025-11-01 14:02:02 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:02.023709 | orchestrator | 2025-11-01 14:02:02 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:02.023719 | orchestrator | 2025-11-01 14:02:02 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:02:02.023729 | orchestrator | 2025-11-01 14:02:02 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:02:02.023739 | orchestrator | 2025-11-01 14:02:02 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:02.023749 | orchestrator | 2025-11-01 14:02:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:05.363707 | orchestrator | 2025-11-01 14:02:05 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:02:05.363793 | orchestrator | 2025-11-01 14:02:05 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:05.363800 | orchestrator | 2025-11-01 14:02:05 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:05.363806 | orchestrator | 2025-11-01 14:02:05 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:02:05.363812 | orchestrator | 2025-11-01 14:02:05 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:02:05.363818 | orchestrator | 2025-11-01 14:02:05 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:05.363825 | orchestrator | 2025-11-01 14:02:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:08.255901 | orchestrator | 2025-11-01 14:02:08 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:02:08.256019 | orchestrator | 2025-11-01 14:02:08 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:08.256033 | orchestrator | 2025-11-01 14:02:08 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:08.256044 | orchestrator | 2025-11-01 14:02:08 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state STARTED 2025-11-01 14:02:08.256055 | orchestrator | 2025-11-01 14:02:08 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:02:08.256066 | orchestrator | 2025-11-01 14:02:08 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:08.256077 | orchestrator | 2025-11-01 14:02:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:11.273442 | orchestrator | 2025-11-01 14:02:11 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:02:11.275913 | orchestrator | 2025-11-01 14:02:11 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:11.277507 | orchestrator | 2025-11-01 14:02:11 | INFO  | Task 
827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:11.277531 | orchestrator | 2025-11-01 14:02:11 | INFO  | Task 666513bc-b4e0-4274-a9c8-90fbea532cf0 is in state SUCCESS 2025-11-01 14:02:11.280828 | orchestrator | 2025-11-01 14:02:11 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:02:11.282115 | orchestrator | 2025-11-01 14:02:11 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:11.282136 | orchestrator | 2025-11-01 14:02:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:14.373247 | orchestrator | 2025-11-01 14:02:14 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:02:14.373338 | orchestrator | 2025-11-01 14:02:14 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:14.376362 | orchestrator | 2025-11-01 14:02:14 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:14.376384 | orchestrator | 2025-11-01 14:02:14 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:02:14.376699 | orchestrator | 2025-11-01 14:02:14 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:14.376720 | orchestrator | 2025-11-01 14:02:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:17.428308 | orchestrator | 2025-11-01 14:02:17 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:02:17.433373 | orchestrator | 2025-11-01 14:02:17 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:17.433440 | orchestrator | 2025-11-01 14:02:17 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:17.435784 | orchestrator | 2025-11-01 14:02:17 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:02:17.435807 | orchestrator | 2025-11-01 14:02:17 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:17.435818 | orchestrator | 2025-11-01 14:02:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:20.530592 | orchestrator | 2025-11-01 14:02:20 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:02:20.533329 | orchestrator | 2025-11-01 14:02:20 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:20.536319 | orchestrator | 2025-11-01 14:02:20 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:20.536597 | orchestrator | 2025-11-01 14:02:20 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:02:20.538156 | orchestrator | 2025-11-01 14:02:20 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:20.538178 | orchestrator | 2025-11-01 14:02:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:23.615781 | orchestrator | 2025-11-01 14:02:23 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:02:23.618182 | orchestrator | 2025-11-01 14:02:23 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:23.620158 | orchestrator | 2025-11-01 14:02:23 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:23.623660 | orchestrator | 2025-11-01 14:02:23 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:02:23.624573 | orchestrator | 2025-11-01 
14:02:23 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:23.625948 | orchestrator | 2025-11-01 14:02:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:26.726973 | orchestrator | 2025-11-01 14:02:26 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:02:26.729125 | orchestrator | 2025-11-01 14:02:26 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:26.732376 | orchestrator | 2025-11-01 14:02:26 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:26.735263 | orchestrator | 2025-11-01 14:02:26 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:02:26.737054 | orchestrator | 2025-11-01 14:02:26 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:26.738171 | orchestrator | 2025-11-01 14:02:26 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:29.864302 | orchestrator | 2025-11-01 14:02:29 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:02:29.864919 | orchestrator | 2025-11-01 14:02:29 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:29.865660 | orchestrator | 2025-11-01 14:02:29 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:29.866517 | orchestrator | 2025-11-01 14:02:29 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:02:29.867250 | orchestrator | 2025-11-01 14:02:29 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:29.867574 | orchestrator | 2025-11-01 14:02:29 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:32.983126 | orchestrator | 2025-11-01 14:02:32 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:02:32.987910 | orchestrator | 2025-11-01 14:02:32 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:32.991926 | orchestrator | 2025-11-01 14:02:32 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:32.993554 | orchestrator | 2025-11-01 14:02:32 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state STARTED 2025-11-01 14:02:33.001335 | orchestrator | 2025-11-01 14:02:32 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:33.001368 | orchestrator | 2025-11-01 14:02:32 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:36.047929 | orchestrator | 2025-11-01 14:02:36 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state STARTED 2025-11-01 14:02:36.051249 | orchestrator | 2025-11-01 14:02:36 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:36.052948 | orchestrator | 2025-11-01 14:02:36 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:36.054163 | orchestrator | 2025-11-01 14:02:36 | INFO  | Task 42ce4e5a-9dd3-466d-ae55-5b993ceb7c5f is in state SUCCESS 2025-11-01 14:02:36.055881 | orchestrator | 2025-11-01 14:02:36.055920 | orchestrator | 2025-11-01 14:02:36.055932 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-11-01 14:02:36.055944 | orchestrator | 2025-11-01 14:02:36.055955 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-11-01 14:02:36.055966 | orchestrator | 
Saturday 01 November 2025 14:01:13 +0000 (0:00:00.274) 0:00:00.274 ***** 2025-11-01 14:02:36.055977 | orchestrator | ok: [testbed-manager] => { 2025-11-01 14:02:36.055990 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-11-01 14:02:36.056081 | orchestrator | } 2025-11-01 14:02:36.056093 | orchestrator | 2025-11-01 14:02:36.056104 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-11-01 14:02:36.056115 | orchestrator | Saturday 01 November 2025 14:01:13 +0000 (0:00:00.260) 0:00:00.534 ***** 2025-11-01 14:02:36.056125 | orchestrator | ok: [testbed-manager] 2025-11-01 14:02:36.056137 | orchestrator | 2025-11-01 14:02:36.056148 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-11-01 14:02:36.056159 | orchestrator | Saturday 01 November 2025 14:01:14 +0000 (0:00:00.998) 0:00:01.533 ***** 2025-11-01 14:02:36.056170 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-11-01 14:02:36.056181 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-11-01 14:02:36.056191 | orchestrator | 2025-11-01 14:02:36.056202 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-11-01 14:02:36.056213 | orchestrator | Saturday 01 November 2025 14:01:15 +0000 (0:00:01.142) 0:00:02.675 ***** 2025-11-01 14:02:36.056223 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:36.056234 | orchestrator | 2025-11-01 14:02:36.056245 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-11-01 14:02:36.056255 | orchestrator | Saturday 01 November 2025 14:01:17 +0000 (0:00:01.899) 0:00:04.575 ***** 2025-11-01 14:02:36.056266 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:36.056277 | orchestrator | 2025-11-01 14:02:36.056287 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-11-01 14:02:36.056298 | orchestrator | Saturday 01 November 2025 14:01:18 +0000 (0:00:01.306) 0:00:05.881 ***** 2025-11-01 14:02:36.056309 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
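The `FAILED - RETRYING ... (10 retries left)` lines above come from a bounded retry loop: the service-management task is re-evaluated a limited number of times, with a delay between attempts, until it reports success (in this log that loop is driven by Ansible's retries/delay/until mechanism while the containers are pulled and started). The helper below is a rough, generic illustration of that pattern only; it is not the role's actual logic, and its attempt accounting may differ from Ansible's.

```python
# Generic bounded-retry helper mirroring the "FAILED - RETRYING ... (N retries
# left)" behaviour seen above. Illustrative only.
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def retry(check: Callable[[], T],
          retries: int = 10,
          delay: float = 5.0,
          until: Callable[[T], bool] = bool) -> T:
    # One initial attempt, then up to `retries` further attempts with a pause.
    result = check()
    remaining = retries
    while not until(result) and remaining > 0:
        print(f"FAILED - RETRYING ({remaining} retries left).")
        time.sleep(delay)
        result = check()
        remaining -= 1
    if not until(result):
        raise RuntimeError("still failing after all retries")
    return result

# Example (hypothetical check): retry(lambda: service_is_up(), retries=10, delay=5.0)
```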
2025-11-01 14:02:36.056319 | orchestrator | ok: [testbed-manager] 2025-11-01 14:02:36.056330 | orchestrator | 2025-11-01 14:02:36.056340 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-11-01 14:02:36.056351 | orchestrator | Saturday 01 November 2025 14:01:46 +0000 (0:00:27.813) 0:00:33.695 ***** 2025-11-01 14:02:36.056362 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:36.056372 | orchestrator | 2025-11-01 14:02:36.056383 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:02:36.056394 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:02:36.056430 | orchestrator | 2025-11-01 14:02:36.056441 | orchestrator | 2025-11-01 14:02:36.056452 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:02:36.056463 | orchestrator | Saturday 01 November 2025 14:01:51 +0000 (0:00:05.310) 0:00:39.006 ***** 2025-11-01 14:02:36.056473 | orchestrator | =============================================================================== 2025-11-01 14:02:36.056484 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.81s 2025-11-01 14:02:36.056495 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 5.31s 2025-11-01 14:02:36.056519 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.90s 2025-11-01 14:02:36.056530 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.31s 2025-11-01 14:02:36.056547 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.14s 2025-11-01 14:02:36.056559 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.00s 2025-11-01 14:02:36.056569 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.26s 2025-11-01 14:02:36.056580 | orchestrator | 2025-11-01 14:02:36.056591 | orchestrator | 2025-11-01 14:02:36.056602 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-11-01 14:02:36.056612 | orchestrator | 2025-11-01 14:02:36.056623 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-11-01 14:02:36.056634 | orchestrator | Saturday 01 November 2025 14:01:13 +0000 (0:00:00.749) 0:00:00.749 ***** 2025-11-01 14:02:36.056645 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-11-01 14:02:36.056657 | orchestrator | 2025-11-01 14:02:36.056668 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-11-01 14:02:36.056679 | orchestrator | Saturday 01 November 2025 14:01:14 +0000 (0:00:00.954) 0:00:01.704 ***** 2025-11-01 14:02:36.056689 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-11-01 14:02:36.056700 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-11-01 14:02:36.056714 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-11-01 14:02:36.056727 | orchestrator | 2025-11-01 14:02:36.056739 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-11-01 
14:02:36.056751 | orchestrator | Saturday 01 November 2025 14:01:16 +0000 (0:00:01.967) 0:00:03.671 ***** 2025-11-01 14:02:36.056764 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:36.056776 | orchestrator | 2025-11-01 14:02:36.056788 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-11-01 14:02:36.056800 | orchestrator | Saturday 01 November 2025 14:01:19 +0000 (0:00:02.802) 0:00:06.473 ***** 2025-11-01 14:02:36.056824 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-11-01 14:02:36.056838 | orchestrator | ok: [testbed-manager] 2025-11-01 14:02:36.056851 | orchestrator | 2025-11-01 14:02:36.056864 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-11-01 14:02:36.056876 | orchestrator | Saturday 01 November 2025 14:01:53 +0000 (0:00:34.359) 0:00:40.833 ***** 2025-11-01 14:02:36.056888 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:36.056900 | orchestrator | 2025-11-01 14:02:36.056913 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-11-01 14:02:36.056926 | orchestrator | Saturday 01 November 2025 14:01:57 +0000 (0:00:03.629) 0:00:44.463 ***** 2025-11-01 14:02:36.056938 | orchestrator | ok: [testbed-manager] 2025-11-01 14:02:36.056951 | orchestrator | 2025-11-01 14:02:36.056963 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-11-01 14:02:36.056975 | orchestrator | Saturday 01 November 2025 14:01:58 +0000 (0:00:01.220) 0:00:45.684 ***** 2025-11-01 14:02:36.056988 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:36.056999 | orchestrator | 2025-11-01 14:02:36.057011 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-11-01 14:02:36.057024 | orchestrator | Saturday 01 November 2025 14:02:02 +0000 (0:00:03.595) 0:00:49.279 ***** 2025-11-01 14:02:36.057036 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:36.057048 | orchestrator | 2025-11-01 14:02:36.057061 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-11-01 14:02:36.057072 | orchestrator | Saturday 01 November 2025 14:02:03 +0000 (0:00:01.371) 0:00:50.651 ***** 2025-11-01 14:02:36.057089 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:36.057100 | orchestrator | 2025-11-01 14:02:36.057111 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-11-01 14:02:36.057121 | orchestrator | Saturday 01 November 2025 14:02:06 +0000 (0:00:02.666) 0:00:53.318 ***** 2025-11-01 14:02:36.057132 | orchestrator | ok: [testbed-manager] 2025-11-01 14:02:36.057143 | orchestrator | 2025-11-01 14:02:36.057153 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:02:36.057164 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:02:36.057175 | orchestrator | 2025-11-01 14:02:36.057185 | orchestrator | 2025-11-01 14:02:36.057196 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:02:36.057207 | orchestrator | Saturday 01 November 2025 14:02:08 +0000 (0:00:01.644) 0:00:54.962 ***** 2025-11-01 14:02:36.057217 | orchestrator | 
=============================================================================== 2025-11-01 14:02:36.057228 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.36s 2025-11-01 14:02:36.057239 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 3.63s 2025-11-01 14:02:36.057249 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.60s 2025-11-01 14:02:36.057260 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.80s 2025-11-01 14:02:36.057270 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 2.67s 2025-11-01 14:02:36.057281 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.97s 2025-11-01 14:02:36.057292 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.64s 2025-11-01 14:02:36.057302 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.37s 2025-11-01 14:02:36.057313 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.22s 2025-11-01 14:02:36.057324 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.95s 2025-11-01 14:02:36.057334 | orchestrator | 2025-11-01 14:02:36.057345 | orchestrator | 2025-11-01 14:02:36.057360 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-11-01 14:02:36.057371 | orchestrator | 2025-11-01 14:02:36.057381 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-11-01 14:02:36.057392 | orchestrator | Saturday 01 November 2025 14:01:31 +0000 (0:00:00.195) 0:00:00.195 ***** 2025-11-01 14:02:36.057431 | orchestrator | ok: [testbed-manager] 2025-11-01 14:02:36.057442 | orchestrator | 2025-11-01 14:02:36.057453 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-11-01 14:02:36.057464 | orchestrator | Saturday 01 November 2025 14:01:32 +0000 (0:00:00.925) 0:00:01.121 ***** 2025-11-01 14:02:36.057475 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-11-01 14:02:36.057485 | orchestrator | 2025-11-01 14:02:36.057496 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-11-01 14:02:36.057507 | orchestrator | Saturday 01 November 2025 14:01:32 +0000 (0:00:00.860) 0:00:01.981 ***** 2025-11-01 14:02:36.057517 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:36.057528 | orchestrator | 2025-11-01 14:02:36.057539 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-11-01 14:02:36.057550 | orchestrator | Saturday 01 November 2025 14:01:34 +0000 (0:00:01.045) 0:00:03.027 ***** 2025-11-01 14:02:36.057560 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
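Several of the long-running `Manage ... service` tasks in these plays, as well as the `Wait for an healthy service` handler, amount to waiting until a freshly started container reports a healthy state. The sketch below shows one way to express such a check with the docker CLI's health status; it assumes the container defines a healthcheck, and the container name in the usage comment is a placeholder, not taken from this log.

```python
# Sketch of a "wait until the container is healthy" check, comparable in spirit
# to the handlers above. Requires a local docker CLI and a container with a
# healthcheck; illustrative only.
import subprocess
import time


def wait_until_healthy(container: str,
                       timeout: float = 300.0,
                       interval: float = 5.0) -> None:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = subprocess.run(
            ["docker", "inspect", "--format", "{{.State.Health.Status}}", container],
            capture_output=True,
            text=True,
        )
        if result.returncode == 0 and result.stdout.strip() == "healthy":
            return
        time.sleep(interval)
    raise TimeoutError(f"{container} did not become healthy within {timeout}s")

# Example with a placeholder name: wait_until_healthy("openstackclient")
```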
2025-11-01 14:02:36.057571 | orchestrator | ok: [testbed-manager] 2025-11-01 14:02:36.057582 | orchestrator | 2025-11-01 14:02:36.057592 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-11-01 14:02:36.057603 | orchestrator | Saturday 01 November 2025 14:02:30 +0000 (0:00:56.337) 0:00:59.365 ***** 2025-11-01 14:02:36.057614 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:36.057625 | orchestrator | 2025-11-01 14:02:36.057642 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:02:36.057653 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:02:36.057663 | orchestrator | 2025-11-01 14:02:36.057674 | orchestrator | 2025-11-01 14:02:36.057685 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:02:36.057701 | orchestrator | Saturday 01 November 2025 14:02:34 +0000 (0:00:04.482) 0:01:03.847 ***** 2025-11-01 14:02:36.057713 | orchestrator | =============================================================================== 2025-11-01 14:02:36.057723 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 56.34s 2025-11-01 14:02:36.057734 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.48s 2025-11-01 14:02:36.057745 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.05s 2025-11-01 14:02:36.057755 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.93s 2025-11-01 14:02:36.057766 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.86s 2025-11-01 14:02:36.057777 | orchestrator | 2025-11-01 14:02:36 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:36.057787 | orchestrator | 2025-11-01 14:02:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:39.098648 | orchestrator | 2025-11-01 14:02:39.098737 | orchestrator | 2025-11-01 14:02:39.098752 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:02:39.098764 | orchestrator | 2025-11-01 14:02:39.098775 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:02:39.098787 | orchestrator | Saturday 01 November 2025 14:01:11 +0000 (0:00:00.639) 0:00:00.639 ***** 2025-11-01 14:02:39.098799 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-11-01 14:02:39.098810 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-11-01 14:02:39.098820 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-11-01 14:02:39.098831 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-11-01 14:02:39.098841 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-11-01 14:02:39.098852 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-11-01 14:02:39.098862 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-11-01 14:02:39.098873 | orchestrator | 2025-11-01 14:02:39.098883 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-11-01 14:02:39.098894 | orchestrator | 2025-11-01 14:02:39.098904 | orchestrator | TASK [osism.services.netdata : Include distribution 
specific install tasks] **** 2025-11-01 14:02:39.098915 | orchestrator | Saturday 01 November 2025 14:01:13 +0000 (0:00:02.295) 0:00:02.934 ***** 2025-11-01 14:02:39.098946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:02:39.098966 | orchestrator | 2025-11-01 14:02:39.098977 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-11-01 14:02:39.098988 | orchestrator | Saturday 01 November 2025 14:01:16 +0000 (0:00:02.148) 0:00:05.083 ***** 2025-11-01 14:02:39.098999 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:02:39.099011 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:02:39.099022 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:02:39.099032 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:02:39.099043 | orchestrator | ok: [testbed-manager] 2025-11-01 14:02:39.099053 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:02:39.099064 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:02:39.099074 | orchestrator | 2025-11-01 14:02:39.099085 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-11-01 14:02:39.099096 | orchestrator | Saturday 01 November 2025 14:01:18 +0000 (0:00:02.192) 0:00:07.275 ***** 2025-11-01 14:02:39.099129 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:02:39.099140 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:02:39.099151 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:02:39.099162 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:02:39.099175 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:02:39.099188 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:02:39.099201 | orchestrator | ok: [testbed-manager] 2025-11-01 14:02:39.099213 | orchestrator | 2025-11-01 14:02:39.099226 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-11-01 14:02:39.099238 | orchestrator | Saturday 01 November 2025 14:01:21 +0000 (0:00:03.483) 0:00:10.758 ***** 2025-11-01 14:02:39.099251 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:39.099264 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:02:39.099276 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:02:39.099288 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:02:39.099301 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:02:39.099314 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:02:39.099326 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:02:39.099339 | orchestrator | 2025-11-01 14:02:39.099351 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-11-01 14:02:39.099364 | orchestrator | Saturday 01 November 2025 14:01:23 +0000 (0:00:02.237) 0:00:12.996 ***** 2025-11-01 14:02:39.099376 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:02:39.099389 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:02:39.099429 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:02:39.099443 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:02:39.099455 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:02:39.099468 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:02:39.099480 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:39.099493 | orchestrator | 2025-11-01 14:02:39.099506 | orchestrator | 
TASK [osism.services.netdata : Install package netdata] ************************ 2025-11-01 14:02:39.099519 | orchestrator | Saturday 01 November 2025 14:01:37 +0000 (0:00:13.225) 0:00:26.222 ***** 2025-11-01 14:02:39.099530 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:02:39.099540 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:02:39.099551 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:02:39.099562 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:02:39.099572 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:02:39.099583 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:02:39.099593 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:39.099604 | orchestrator | 2025-11-01 14:02:39.099614 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-11-01 14:02:39.099625 | orchestrator | Saturday 01 November 2025 14:02:11 +0000 (0:00:34.728) 0:01:00.950 ***** 2025-11-01 14:02:39.099679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:02:39.099693 | orchestrator | 2025-11-01 14:02:39.099704 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-11-01 14:02:39.099715 | orchestrator | Saturday 01 November 2025 14:02:13 +0000 (0:00:01.502) 0:01:02.453 ***** 2025-11-01 14:02:39.099726 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-11-01 14:02:39.099737 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-11-01 14:02:39.099748 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-11-01 14:02:39.099759 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-11-01 14:02:39.099787 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-11-01 14:02:39.099799 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-11-01 14:02:39.099809 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-11-01 14:02:39.099820 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-11-01 14:02:39.099839 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-11-01 14:02:39.099850 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-11-01 14:02:39.099860 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-11-01 14:02:39.099871 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-11-01 14:02:39.099881 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-11-01 14:02:39.099892 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-11-01 14:02:39.099902 | orchestrator | 2025-11-01 14:02:39.099913 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-11-01 14:02:39.099925 | orchestrator | Saturday 01 November 2025 14:02:19 +0000 (0:00:05.934) 0:01:08.387 ***** 2025-11-01 14:02:39.099936 | orchestrator | ok: [testbed-manager] 2025-11-01 14:02:39.099947 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:02:39.099957 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:02:39.099968 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:02:39.099979 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:02:39.099989 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:02:39.100000 | 
orchestrator | ok: [testbed-node-5] 2025-11-01 14:02:39.100011 | orchestrator | 2025-11-01 14:02:39.100022 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-11-01 14:02:39.100033 | orchestrator | Saturday 01 November 2025 14:02:22 +0000 (0:00:02.703) 0:01:11.090 ***** 2025-11-01 14:02:39.100043 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:02:39.100054 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:02:39.100065 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:39.100075 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:02:39.100086 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:02:39.100096 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:02:39.100107 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:02:39.100117 | orchestrator | 2025-11-01 14:02:39.100128 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-11-01 14:02:39.100139 | orchestrator | Saturday 01 November 2025 14:02:24 +0000 (0:00:02.580) 0:01:13.671 ***** 2025-11-01 14:02:39.100149 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:02:39.100160 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:02:39.100170 | orchestrator | ok: [testbed-manager] 2025-11-01 14:02:39.100181 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:02:39.100191 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:02:39.100202 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:02:39.100212 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:02:39.100223 | orchestrator | 2025-11-01 14:02:39.100233 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-11-01 14:02:39.100249 | orchestrator | Saturday 01 November 2025 14:02:26 +0000 (0:00:01.625) 0:01:15.296 ***** 2025-11-01 14:02:39.100260 | orchestrator | ok: [testbed-manager] 2025-11-01 14:02:39.100271 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:02:39.100282 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:02:39.100292 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:02:39.100303 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:02:39.100313 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:02:39.100323 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:02:39.100334 | orchestrator | 2025-11-01 14:02:39.100344 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-11-01 14:02:39.100355 | orchestrator | Saturday 01 November 2025 14:02:28 +0000 (0:00:02.442) 0:01:17.739 ***** 2025-11-01 14:02:39.100366 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-11-01 14:02:39.100380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:02:39.100391 | orchestrator | 2025-11-01 14:02:39.100421 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-11-01 14:02:39.100440 | orchestrator | Saturday 01 November 2025 14:02:30 +0000 (0:00:01.706) 0:01:19.445 ***** 2025-11-01 14:02:39.100451 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:39.100462 | orchestrator | 2025-11-01 14:02:39.100472 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-11-01 
14:02:39.100483 | orchestrator | Saturday 01 November 2025 14:02:32 +0000 (0:00:02.588) 0:01:22.034 ***** 2025-11-01 14:02:39.100493 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:02:39.100504 | orchestrator | changed: [testbed-manager] 2025-11-01 14:02:39.100515 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:02:39.100525 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:02:39.100535 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:02:39.100546 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:02:39.100557 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:02:39.100567 | orchestrator | 2025-11-01 14:02:39.100578 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:02:39.100589 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:02:39.100601 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:02:39.100611 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:02:39.100622 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:02:39.100640 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:02:39.100651 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:02:39.100662 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:02:39.100672 | orchestrator | 2025-11-01 14:02:39.100683 | orchestrator | 2025-11-01 14:02:39.100694 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:02:39.100704 | orchestrator | Saturday 01 November 2025 14:02:36 +0000 (0:00:03.662) 0:01:25.696 ***** 2025-11-01 14:02:39.100715 | orchestrator | =============================================================================== 2025-11-01 14:02:39.100726 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 34.73s 2025-11-01 14:02:39.100736 | orchestrator | osism.services.netdata : Add repository -------------------------------- 13.23s 2025-11-01 14:02:39.100747 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.93s 2025-11-01 14:02:39.100757 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.66s 2025-11-01 14:02:39.100768 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.48s 2025-11-01 14:02:39.100779 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.70s 2025-11-01 14:02:39.100789 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.59s 2025-11-01 14:02:39.100800 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.58s 2025-11-01 14:02:39.100810 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.44s 2025-11-01 14:02:39.100821 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.30s 2025-11-01 14:02:39.100832 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.24s 2025-11-01 14:02:39.100842 
| orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.19s 2025-11-01 14:02:39.100853 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.15s 2025-11-01 14:02:39.100869 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.71s 2025-11-01 14:02:39.100880 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.63s 2025-11-01 14:02:39.100895 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.50s 2025-11-01 14:02:39.100907 | orchestrator | 2025-11-01 14:02:39 | INFO  | Task d914f1a1-422c-4c38-84e2-3feae7074def is in state SUCCESS 2025-11-01 14:02:39.101478 | orchestrator | 2025-11-01 14:02:39 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:39.104389 | orchestrator | 2025-11-01 14:02:39 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:39.109530 | orchestrator | 2025-11-01 14:02:39 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:39.109553 | orchestrator | 2025-11-01 14:02:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:42.149300 | orchestrator | 2025-11-01 14:02:42 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:42.150304 | orchestrator | 2025-11-01 14:02:42 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:42.152785 | orchestrator | 2025-11-01 14:02:42 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:42.153715 | orchestrator | 2025-11-01 14:02:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:45.207778 | orchestrator | 2025-11-01 14:02:45 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:45.208879 | orchestrator | 2025-11-01 14:02:45 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:45.210924 | orchestrator | 2025-11-01 14:02:45 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:45.211037 | orchestrator | 2025-11-01 14:02:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:48.248196 | orchestrator | 2025-11-01 14:02:48 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:48.253578 | orchestrator | 2025-11-01 14:02:48 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:48.255091 | orchestrator | 2025-11-01 14:02:48 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:48.255112 | orchestrator | 2025-11-01 14:02:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:51.314255 | orchestrator | 2025-11-01 14:02:51 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:51.314342 | orchestrator | 2025-11-01 14:02:51 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:51.314356 | orchestrator | 2025-11-01 14:02:51 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:51.314368 | orchestrator | 2025-11-01 14:02:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:54.353091 | orchestrator | 2025-11-01 14:02:54 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:54.354601 | orchestrator | 2025-11-01 14:02:54 | INFO  | Task 
827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:54.358981 | orchestrator | 2025-11-01 14:02:54 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:54.359001 | orchestrator | 2025-11-01 14:02:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:02:57.450674 | orchestrator | 2025-11-01 14:02:57 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:02:57.451657 | orchestrator | 2025-11-01 14:02:57 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:02:57.453151 | orchestrator | 2025-11-01 14:02:57 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:02:57.453181 | orchestrator | 2025-11-01 14:02:57 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:00.519231 | orchestrator | 2025-11-01 14:03:00 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:00.519332 | orchestrator | 2025-11-01 14:03:00 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:00.521215 | orchestrator | 2025-11-01 14:03:00 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:03:00.521446 | orchestrator | 2025-11-01 14:03:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:03.592145 | orchestrator | 2025-11-01 14:03:03 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:03.592894 | orchestrator | 2025-11-01 14:03:03 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:03.594631 | orchestrator | 2025-11-01 14:03:03 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:03:03.594658 | orchestrator | 2025-11-01 14:03:03 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:06.626902 | orchestrator | 2025-11-01 14:03:06 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:06.627950 | orchestrator | 2025-11-01 14:03:06 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:06.628401 | orchestrator | 2025-11-01 14:03:06 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:03:06.628473 | orchestrator | 2025-11-01 14:03:06 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:09.667288 | orchestrator | 2025-11-01 14:03:09 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:09.667375 | orchestrator | 2025-11-01 14:03:09 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:09.667988 | orchestrator | 2025-11-01 14:03:09 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:03:09.668012 | orchestrator | 2025-11-01 14:03:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:12.702366 | orchestrator | 2025-11-01 14:03:12 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:12.705076 | orchestrator | 2025-11-01 14:03:12 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:12.705299 | orchestrator | 2025-11-01 14:03:12 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:03:12.705321 | orchestrator | 2025-11-01 14:03:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:15.759556 | orchestrator | 2025-11-01 14:03:15 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state 
STARTED 2025-11-01 14:03:15.760340 | orchestrator | 2025-11-01 14:03:15 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:15.761613 | orchestrator | 2025-11-01 14:03:15 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:03:15.761633 | orchestrator | 2025-11-01 14:03:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:18.799952 | orchestrator | 2025-11-01 14:03:18 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:18.800046 | orchestrator | 2025-11-01 14:03:18 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:18.800816 | orchestrator | 2025-11-01 14:03:18 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:03:18.800841 | orchestrator | 2025-11-01 14:03:18 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:21.833700 | orchestrator | 2025-11-01 14:03:21 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:21.834196 | orchestrator | 2025-11-01 14:03:21 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:21.835621 | orchestrator | 2025-11-01 14:03:21 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:03:21.835642 | orchestrator | 2025-11-01 14:03:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:24.875961 | orchestrator | 2025-11-01 14:03:24 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:24.877250 | orchestrator | 2025-11-01 14:03:24 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:24.878839 | orchestrator | 2025-11-01 14:03:24 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:03:24.879212 | orchestrator | 2025-11-01 14:03:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:27.922480 | orchestrator | 2025-11-01 14:03:27 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:27.924107 | orchestrator | 2025-11-01 14:03:27 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:27.925648 | orchestrator | 2025-11-01 14:03:27 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:03:27.925669 | orchestrator | 2025-11-01 14:03:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:30.965522 | orchestrator | 2025-11-01 14:03:30 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:30.965949 | orchestrator | 2025-11-01 14:03:30 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:30.966732 | orchestrator | 2025-11-01 14:03:30 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:03:30.966761 | orchestrator | 2025-11-01 14:03:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:33.998206 | orchestrator | 2025-11-01 14:03:33 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:33.998991 | orchestrator | 2025-11-01 14:03:33 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:34.000573 | orchestrator | 2025-11-01 14:03:33 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state STARTED 2025-11-01 14:03:34.000599 | orchestrator | 2025-11-01 14:03:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:37.033546 | orchestrator 
| 2025-11-01 14:03:37 | INFO  | Task deda0de2-f2a5-4841-be63-21af5f11ea06 is in state STARTED 2025-11-01 14:03:37.034124 | orchestrator | 2025-11-01 14:03:37 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:37.034814 | orchestrator | 2025-11-01 14:03:37 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:37.035384 | orchestrator | 2025-11-01 14:03:37 | INFO  | Task 58121895-b68c-405d-a03a-578761b354a7 is in state STARTED 2025-11-01 14:03:37.036266 | orchestrator | 2025-11-01 14:03:37 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:03:37.037112 | orchestrator | 2025-11-01 14:03:37 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:03:37.039779 | orchestrator | 2025-11-01 14:03:37 | INFO  | Task 1c7aefcf-1a3f-438a-a28d-fa63f5eaca0c is in state SUCCESS 2025-11-01 14:03:37.043080 | orchestrator | 2025-11-01 14:03:37.043121 | orchestrator | 2025-11-01 14:03:37.043134 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-11-01 14:03:37.043145 | orchestrator | 2025-11-01 14:03:37.043156 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-11-01 14:03:37.043167 | orchestrator | Saturday 01 November 2025 14:01:02 +0000 (0:00:00.329) 0:00:00.329 ***** 2025-11-01 14:03:37.043179 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:03:37.043192 | orchestrator | 2025-11-01 14:03:37.043203 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-11-01 14:03:37.043214 | orchestrator | Saturday 01 November 2025 14:01:04 +0000 (0:00:01.645) 0:00:01.975 ***** 2025-11-01 14:03:37.043225 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-11-01 14:03:37.043236 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-11-01 14:03:37.043247 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-11-01 14:03:37.043258 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-11-01 14:03:37.043269 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-11-01 14:03:37.043279 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-11-01 14:03:37.043290 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-11-01 14:03:37.043301 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-11-01 14:03:37.043337 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-11-01 14:03:37.043349 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-11-01 14:03:37.043361 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-11-01 14:03:37.043372 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-11-01 14:03:37.043384 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-11-01 14:03:37.043397 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 
'fluentd'}, 'fluentd']) 2025-11-01 14:03:37.043408 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-11-01 14:03:37.043452 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-11-01 14:03:37.043463 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-11-01 14:03:37.043474 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-11-01 14:03:37.043484 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-11-01 14:03:37.043495 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-11-01 14:03:37.043506 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-11-01 14:03:37.043516 | orchestrator | 2025-11-01 14:03:37.043527 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-11-01 14:03:37.043538 | orchestrator | Saturday 01 November 2025 14:01:09 +0000 (0:00:04.782) 0:00:06.758 ***** 2025-11-01 14:03:37.043556 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:03:37.043569 | orchestrator | 2025-11-01 14:03:37.043579 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-11-01 14:03:37.043604 | orchestrator | Saturday 01 November 2025 14:01:10 +0000 (0:00:01.466) 0:00:08.224 ***** 2025-11-01 14:03:37.043620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.043637 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.043701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.043718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.043733 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.043747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.043765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.043786 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.043799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.043843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.043858 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.043872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.043899 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.043912 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.043926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.043960 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.043974 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.044018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.044032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.044044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.044055 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.044066 | orchestrator | 2025-11-01 14:03:37.044078 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-11-01 14:03:37.044089 | orchestrator | Saturday 01 November 2025 14:01:15 +0000 (0:00:05.235) 0:00:13.460 ***** 2025-11-01 14:03:37.044100 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 14:03:37.044125 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044137 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044149 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:03:37.044161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 14:03:37.044208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044233 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:03:37.044245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 14:03:37.044256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044286 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:03:37.044301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 14:03:37.044313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044336 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:03:37.044352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 14:03:37.044364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044387 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:03:37.044399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 14:03:37.044438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044466 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:03:37.044477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 14:03:37.044496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044519 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:03:37.044530 | orchestrator | 2025-11-01 14:03:37.044541 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-11-01 14:03:37.044552 | orchestrator | Saturday 01 November 2025 14:01:17 +0000 (0:00:01.838) 0:00:15.298 ***** 2025-11-01 14:03:37.044563 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 14:03:37.044581 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044592 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044603 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:03:37.044619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 14:03:37.044631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 14:03:37.044675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044703 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:03:37.044714 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:03:37.044726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 14:03:37.044737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 14:03:37.044782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044804 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:03:37.044815 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:03:37.044827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 14:03:37.044844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044867 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:03:37.044878 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 14:03:37.044894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.044917 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:03:37.044928 | orchestrator | 2025-11-01 14:03:37.044939 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-11-01 14:03:37.044950 | orchestrator | Saturday 01 November 2025 14:01:21 +0000 (0:00:03.693) 0:00:18.992 ***** 2025-11-01 14:03:37.044961 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:03:37.044972 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:03:37.044983 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:03:37.044994 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:03:37.045005 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:03:37.045021 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:03:37.045032 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:03:37.045042 | orchestrator | 2025-11-01 14:03:37.045053 | orchestrator | TASK [common : Restart systemd-tmpfiles] 
*************************************** 2025-11-01 14:03:37.045070 | orchestrator | Saturday 01 November 2025 14:01:22 +0000 (0:00:01.310) 0:00:20.302 ***** 2025-11-01 14:03:37.045081 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:03:37.045091 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:03:37.045102 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:03:37.045112 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:03:37.045123 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:03:37.045134 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:03:37.045144 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:03:37.045155 | orchestrator | 2025-11-01 14:03:37.045166 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-11-01 14:03:37.045177 | orchestrator | Saturday 01 November 2025 14:01:23 +0000 (0:00:01.162) 0:00:21.465 ***** 2025-11-01 14:03:37.045188 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.045199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.045211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.045223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.045234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.045245 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.045279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.045291 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.045311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.045323 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.045335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.045350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.045362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.045380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.045398 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.045455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.045469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.045481 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.045492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.045509 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.045521 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.045539 | orchestrator | 2025-11-01 14:03:37.045550 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-11-01 14:03:37.045561 | orchestrator | Saturday 01 November 2025 14:01:33 +0000 (0:00:09.962) 0:00:31.427 ***** 2025-11-01 14:03:37.045572 | orchestrator | [WARNING]: Skipped 2025-11-01 14:03:37.045585 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-11-01 14:03:37.045596 | orchestrator | to this access issue: 2025-11-01 14:03:37.045607 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-11-01 14:03:37.045618 | orchestrator | directory 2025-11-01 14:03:37.045629 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 14:03:37.045640 | orchestrator | 2025-11-01 14:03:37.045650 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-11-01 14:03:37.045661 | orchestrator | Saturday 01 November 2025 14:01:35 +0000 (0:00:01.461) 0:00:32.889 ***** 2025-11-01 14:03:37.045672 | orchestrator | [WARNING]: Skipped 2025-11-01 14:03:37.045683 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-11-01 14:03:37.045700 | orchestrator | to this access issue: 2025-11-01 14:03:37.045711 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-11-01 14:03:37.045722 | orchestrator | directory 2025-11-01 14:03:37.045733 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 14:03:37.045744 | orchestrator | 2025-11-01 14:03:37.045754 | orchestrator | TASK [common 
: Find custom fluentd format config files] ************************ 2025-11-01 14:03:37.045765 | orchestrator | Saturday 01 November 2025 14:01:36 +0000 (0:00:01.088) 0:00:33.977 ***** 2025-11-01 14:03:37.045776 | orchestrator | [WARNING]: Skipped 2025-11-01 14:03:37.045786 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-11-01 14:03:37.045797 | orchestrator | to this access issue: 2025-11-01 14:03:37.045808 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-11-01 14:03:37.045819 | orchestrator | directory 2025-11-01 14:03:37.045829 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 14:03:37.045840 | orchestrator | 2025-11-01 14:03:37.045851 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-11-01 14:03:37.045862 | orchestrator | Saturday 01 November 2025 14:01:37 +0000 (0:00:01.054) 0:00:35.032 ***** 2025-11-01 14:03:37.045872 | orchestrator | [WARNING]: Skipped 2025-11-01 14:03:37.045883 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-11-01 14:03:37.045894 | orchestrator | to this access issue: 2025-11-01 14:03:37.045904 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-11-01 14:03:37.045915 | orchestrator | directory 2025-11-01 14:03:37.045926 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 14:03:37.045937 | orchestrator | 2025-11-01 14:03:37.045947 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-11-01 14:03:37.045958 | orchestrator | Saturday 01 November 2025 14:01:38 +0000 (0:00:01.663) 0:00:36.695 ***** 2025-11-01 14:03:37.045969 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:03:37.045979 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:03:37.045990 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:03:37.046001 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:03:37.046011 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:03:37.046074 | orchestrator | changed: [testbed-manager] 2025-11-01 14:03:37.046085 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:03:37.046096 | orchestrator | 2025-11-01 14:03:37.046107 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-11-01 14:03:37.046118 | orchestrator | Saturday 01 November 2025 14:01:45 +0000 (0:00:06.863) 0:00:43.558 ***** 2025-11-01 14:03:37.046129 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-01 14:03:37.046147 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-01 14:03:37.046158 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-01 14:03:37.046169 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-01 14:03:37.046180 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-01 14:03:37.046191 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-11-01 14:03:37.046202 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 
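The per-item "changed"/"skipping" lines above come from the common role looping over its service map (fluentd, kolla-toolbox, cron) on every host. As a minimal, self-contained sketch (not the kolla-ansible implementation), the decision each host makes per item looks roughly like the following; host names and group membership here are illustrative only.

    # Sketch of the per-item iteration visible in the log: each host either
    # renders the service's configuration ("changed") or skips the item when
    # the service does not apply to it.
    common_services = {
        "fluentd": {
            "container_name": "fluentd",
            "group": "fluentd",
            "enabled": True,
            "image": "registry.osism.tech/kolla/fluentd:2024.2",
        },
        "kolla-toolbox": {
            "container_name": "kolla_toolbox",
            "group": "kolla-toolbox",
            "enabled": True,
            "image": "registry.osism.tech/kolla/kolla-toolbox:2024.2",
        },
        "cron": {
            "container_name": "cron",
            "group": "cron",
            "enabled": True,
            "image": "registry.osism.tech/kolla/cron:2024.2",
        },
    }

    # Hypothetical group membership for two of the testbed hosts.
    host_groups = {
        "testbed-manager": {"fluentd", "kolla-toolbox", "cron"},
        "testbed-node-0": {"fluentd", "kolla-toolbox", "cron"},
    }

    def plan(host: str) -> None:
        """Print the same changed/skipping decisions the loop produces."""
        for key, svc in common_services.items():
            applies = svc["enabled"] and svc["group"] in host_groups.get(host, set())
            state = "changed" if applies else "skipping"
            print(f"{state}: [{host}] => (item={{'key': '{key}', ...}})")

    if __name__ == "__main__":
        for h in host_groups:
            plan(h)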
2025-11-01 14:03:37.046212 | orchestrator | 2025-11-01 14:03:37.046223 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-11-01 14:03:37.046234 | orchestrator | Saturday 01 November 2025 14:01:51 +0000 (0:00:05.183) 0:00:48.742 ***** 2025-11-01 14:03:37.046245 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:03:37.046260 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:03:37.046271 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:03:37.046281 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:03:37.046292 | orchestrator | changed: [testbed-manager] 2025-11-01 14:03:37.046303 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:03:37.046313 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:03:37.046324 | orchestrator | 2025-11-01 14:03:37.046334 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-11-01 14:03:37.046345 | orchestrator | Saturday 01 November 2025 14:01:55 +0000 (0:00:04.327) 0:00:53.069 ***** 2025-11-01 14:03:37.046356 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.046375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.046387 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.046399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.046437 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.046450 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.046465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.046477 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.046489 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.046506 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.046518 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.046529 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.046546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.046558 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.046574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.046586 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.046597 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.046615 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.046627 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.046638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:03:37.046655 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.046666 | orchestrator | 2025-11-01 14:03:37.046677 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-11-01 14:03:37.046688 | orchestrator | Saturday 01 November 2025 14:01:59 +0000 (0:00:04.069) 0:00:57.139 ***** 2025-11-01 14:03:37.046699 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-11-01 14:03:37.046710 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-11-01 14:03:37.046721 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-11-01 14:03:37.046731 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-11-01 14:03:37.046742 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-11-01 14:03:37.046753 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-11-01 14:03:37.046764 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-11-01 14:03:37.046775 | orchestrator | 2025-11-01 14:03:37.046785 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-11-01 14:03:37.046796 | orchestrator | Saturday 01 November 2025 14:02:03 +0000 (0:00:03.926) 0:01:01.066 ***** 2025-11-01 14:03:37.046811 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 
2025-11-01 14:03:37.046822 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-11-01 14:03:37.046833 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-11-01 14:03:37.046844 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-11-01 14:03:37.046855 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-11-01 14:03:37.046866 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-11-01 14:03:37.046876 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-11-01 14:03:37.046887 | orchestrator | 2025-11-01 14:03:37.046898 | orchestrator | TASK [common : Check common containers] **************************************** 2025-11-01 14:03:37.046908 | orchestrator | Saturday 01 November 2025 14:02:06 +0000 (0:00:03.647) 0:01:04.713 ***** 2025-11-01 14:03:37.046919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.046936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.046954 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.046965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.046977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.046988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.047004 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.047016 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.047042 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 14:03:37.047063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.047075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.047087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.047098 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.047114 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.047126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.047137 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.047162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.047175 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.047187 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.047198 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.047210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:03:37.047221 | orchestrator | 2025-11-01 14:03:37.047232 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-11-01 14:03:37.047242 | orchestrator | Saturday 01 November 2025 14:02:11 +0000 (0:00:04.369) 0:01:09.082 ***** 2025-11-01 14:03:37.047253 | orchestrator | changed: [testbed-manager] 2025-11-01 14:03:37.047264 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:03:37.047275 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:03:37.047286 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:03:37.047296 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:03:37.047307 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:03:37.047318 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:03:37.047328 | orchestrator | 2025-11-01 14:03:37.047339 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-11-01 14:03:37.047350 | orchestrator | Saturday 01 November 2025 14:02:13 +0000 (0:00:01.855) 0:01:10.938 ***** 2025-11-01 14:03:37.047361 | orchestrator | changed: [testbed-manager] 2025-11-01 14:03:37.047372 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:03:37.047383 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:03:37.047393 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:03:37.047404 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:03:37.047433 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:03:37.047443 | orchestrator | changed: [testbed-node-5] 
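The "Creating log volume" and "Link kolla_logs volume to /var/log/kolla" tasks just above set up a shared named volume for container logs on each host and make it reachable from the host filesystem. A rough sketch of what that amounts to, using the Docker CLI rather than the actual role code (the volume name and link path match the log; the symlink mechanism is an assumption):

    import os
    import subprocess

    VOLUME = "kolla_logs"
    LINK = "/var/log/kolla"

    def ensure_log_volume() -> str:
        # "docker volume create" is idempotent: it succeeds whether or not
        # the volume already exists.
        subprocess.run(["docker", "volume", "create", VOLUME],
                       check=True, capture_output=True)
        # Ask Docker where the volume lives on the host filesystem.
        out = subprocess.run(
            ["docker", "volume", "inspect", "-f", "{{ .Mountpoint }}", VOLUME],
            check=True, capture_output=True, text=True)
        return out.stdout.strip()

    def link_volume(mountpoint: str) -> None:
        # Expose the volume's data directory at /var/log/kolla so host-side
        # tooling can read the container logs directly.
        if os.path.islink(LINK) or os.path.exists(LINK):
            return  # already present; a real role would reconcile the target
        os.symlink(mountpoint, LINK)

    if __name__ == "__main__":
        link_volume(ensure_log_volume())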
2025-11-01 14:03:37.047454 | orchestrator | 2025-11-01 14:03:37.047465 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-01 14:03:37.047483 | orchestrator | Saturday 01 November 2025 14:02:14 +0000 (0:00:01.624) 0:01:12.563 ***** 2025-11-01 14:03:37.047494 | orchestrator | 2025-11-01 14:03:37.047505 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-01 14:03:37.047515 | orchestrator | Saturday 01 November 2025 14:02:14 +0000 (0:00:00.090) 0:01:12.653 ***** 2025-11-01 14:03:37.047526 | orchestrator | 2025-11-01 14:03:37.047542 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-01 14:03:37.047553 | orchestrator | Saturday 01 November 2025 14:02:14 +0000 (0:00:00.072) 0:01:12.726 ***** 2025-11-01 14:03:37.047564 | orchestrator | 2025-11-01 14:03:37.047575 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-01 14:03:37.047586 | orchestrator | Saturday 01 November 2025 14:02:15 +0000 (0:00:00.084) 0:01:12.811 ***** 2025-11-01 14:03:37.047596 | orchestrator | 2025-11-01 14:03:37.047607 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-01 14:03:37.047618 | orchestrator | Saturday 01 November 2025 14:02:15 +0000 (0:00:00.296) 0:01:13.107 ***** 2025-11-01 14:03:37.047629 | orchestrator | 2025-11-01 14:03:37.047639 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-01 14:03:37.047650 | orchestrator | Saturday 01 November 2025 14:02:15 +0000 (0:00:00.094) 0:01:13.202 ***** 2025-11-01 14:03:37.047661 | orchestrator | 2025-11-01 14:03:37.047671 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-01 14:03:37.047682 | orchestrator | Saturday 01 November 2025 14:02:15 +0000 (0:00:00.090) 0:01:13.292 ***** 2025-11-01 14:03:37.047693 | orchestrator | 2025-11-01 14:03:37.047703 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-11-01 14:03:37.047719 | orchestrator | Saturday 01 November 2025 14:02:15 +0000 (0:00:00.103) 0:01:13.396 ***** 2025-11-01 14:03:37.047730 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:03:37.047741 | orchestrator | changed: [testbed-manager] 2025-11-01 14:03:37.047752 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:03:37.047763 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:03:37.047773 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:03:37.047784 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:03:37.047795 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:03:37.047805 | orchestrator | 2025-11-01 14:03:37.047816 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-11-01 14:03:37.047827 | orchestrator | Saturday 01 November 2025 14:02:50 +0000 (0:00:34.566) 0:01:47.963 ***** 2025-11-01 14:03:37.047837 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:03:37.047848 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:03:37.047859 | orchestrator | changed: [testbed-manager] 2025-11-01 14:03:37.047869 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:03:37.047880 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:03:37.047890 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:03:37.047901 | orchestrator | changed: 
[testbed-node-5] 2025-11-01 14:03:37.047912 | orchestrator | 2025-11-01 14:03:37.047922 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-11-01 14:03:37.047933 | orchestrator | Saturday 01 November 2025 14:03:21 +0000 (0:00:31.220) 0:02:19.184 ***** 2025-11-01 14:03:37.047944 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:03:37.047955 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:03:37.047966 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:03:37.047976 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:03:37.047987 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:03:37.047998 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:03:37.048008 | orchestrator | ok: [testbed-manager] 2025-11-01 14:03:37.048019 | orchestrator | 2025-11-01 14:03:37.048030 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-11-01 14:03:37.048041 | orchestrator | Saturday 01 November 2025 14:03:24 +0000 (0:00:02.602) 0:02:21.787 ***** 2025-11-01 14:03:37.048051 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:03:37.048069 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:03:37.048080 | orchestrator | changed: [testbed-manager] 2025-11-01 14:03:37.048090 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:03:37.048101 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:03:37.048112 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:03:37.048123 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:03:37.048133 | orchestrator | 2025-11-01 14:03:37.048144 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:03:37.048156 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-01 14:03:37.048167 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-01 14:03:37.048178 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-01 14:03:37.048189 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-01 14:03:37.048200 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-01 14:03:37.048211 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-01 14:03:37.048227 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-01 14:03:37.048238 | orchestrator | 2025-11-01 14:03:37.048249 | orchestrator | 2025-11-01 14:03:37.048260 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:03:37.048271 | orchestrator | Saturday 01 November 2025 14:03:34 +0000 (0:00:10.483) 0:02:32.271 ***** 2025-11-01 14:03:37.048281 | orchestrator | =============================================================================== 2025-11-01 14:03:37.048292 | orchestrator | common : Restart fluentd container ------------------------------------- 34.57s 2025-11-01 14:03:37.048303 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 31.22s 2025-11-01 14:03:37.048314 | orchestrator | common : Restart cron container ---------------------------------------- 10.48s 2025-11-01 14:03:37.048324 | orchestrator | common : Copying over config.json files 
for services -------------------- 9.96s 2025-11-01 14:03:37.048335 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 6.86s 2025-11-01 14:03:37.048346 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.24s 2025-11-01 14:03:37.048356 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.18s 2025-11-01 14:03:37.048367 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.78s 2025-11-01 14:03:37.048378 | orchestrator | common : Check common containers ---------------------------------------- 4.37s 2025-11-01 14:03:37.048388 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.33s 2025-11-01 14:03:37.048399 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.07s 2025-11-01 14:03:37.048425 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.93s 2025-11-01 14:03:37.048436 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.69s 2025-11-01 14:03:37.048447 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.65s 2025-11-01 14:03:37.048463 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.60s 2025-11-01 14:03:37.048474 | orchestrator | common : Creating log volume -------------------------------------------- 1.86s 2025-11-01 14:03:37.048485 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.83s 2025-11-01 14:03:37.048502 | orchestrator | common : Find custom fluentd output config files ------------------------ 1.66s 2025-11-01 14:03:37.048513 | orchestrator | common : include_tasks -------------------------------------------------- 1.65s 2025-11-01 14:03:37.048524 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.62s 2025-11-01 14:03:37.048535 | orchestrator | 2025-11-01 14:03:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:40.078390 | orchestrator | 2025-11-01 14:03:40 | INFO  | Task deda0de2-f2a5-4841-be63-21af5f11ea06 is in state STARTED 2025-11-01 14:03:40.079086 | orchestrator | 2025-11-01 14:03:40 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:40.080800 | orchestrator | 2025-11-01 14:03:40 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:40.081865 | orchestrator | 2025-11-01 14:03:40 | INFO  | Task 58121895-b68c-405d-a03a-578761b354a7 is in state STARTED 2025-11-01 14:03:40.082707 | orchestrator | 2025-11-01 14:03:40 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:03:40.083822 | orchestrator | 2025-11-01 14:03:40 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:03:40.083842 | orchestrator | 2025-11-01 14:03:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:43.119313 | orchestrator | 2025-11-01 14:03:43 | INFO  | Task deda0de2-f2a5-4841-be63-21af5f11ea06 is in state STARTED 2025-11-01 14:03:43.119378 | orchestrator | 2025-11-01 14:03:43 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:43.119391 | orchestrator | 2025-11-01 14:03:43 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:43.119403 | orchestrator | 2025-11-01 14:03:43 | INFO  | 
Task 58121895-b68c-405d-a03a-578761b354a7 is in state STARTED 2025-11-01 14:03:43.119437 | orchestrator | 2025-11-01 14:03:43 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:03:43.121996 | orchestrator | 2025-11-01 14:03:43 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:03:43.122057 | orchestrator | 2025-11-01 14:03:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:46.162221 | orchestrator | 2025-11-01 14:03:46 | INFO  | Task deda0de2-f2a5-4841-be63-21af5f11ea06 is in state STARTED 2025-11-01 14:03:46.162288 | orchestrator | 2025-11-01 14:03:46 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:46.163156 | orchestrator | 2025-11-01 14:03:46 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:46.164590 | orchestrator | 2025-11-01 14:03:46 | INFO  | Task 58121895-b68c-405d-a03a-578761b354a7 is in state STARTED 2025-11-01 14:03:46.166090 | orchestrator | 2025-11-01 14:03:46 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:03:46.167582 | orchestrator | 2025-11-01 14:03:46 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:03:46.167603 | orchestrator | 2025-11-01 14:03:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:49.226692 | orchestrator | 2025-11-01 14:03:49 | INFO  | Task deda0de2-f2a5-4841-be63-21af5f11ea06 is in state STARTED 2025-11-01 14:03:49.227745 | orchestrator | 2025-11-01 14:03:49 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:49.228354 | orchestrator | 2025-11-01 14:03:49 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:49.229301 | orchestrator | 2025-11-01 14:03:49 | INFO  | Task 58121895-b68c-405d-a03a-578761b354a7 is in state STARTED 2025-11-01 14:03:49.230355 | orchestrator | 2025-11-01 14:03:49 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:03:49.237903 | orchestrator | 2025-11-01 14:03:49 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:03:49.237933 | orchestrator | 2025-11-01 14:03:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:52.305550 | orchestrator | 2025-11-01 14:03:52 | INFO  | Task deda0de2-f2a5-4841-be63-21af5f11ea06 is in state STARTED 2025-11-01 14:03:52.306482 | orchestrator | 2025-11-01 14:03:52 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:52.307329 | orchestrator | 2025-11-01 14:03:52 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:52.308733 | orchestrator | 2025-11-01 14:03:52 | INFO  | Task 58121895-b68c-405d-a03a-578761b354a7 is in state STARTED 2025-11-01 14:03:52.309211 | orchestrator | 2025-11-01 14:03:52 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:03:52.310292 | orchestrator | 2025-11-01 14:03:52 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:03:52.310314 | orchestrator | 2025-11-01 14:03:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:55.378532 | orchestrator | 2025-11-01 14:03:55 | INFO  | Task deda0de2-f2a5-4841-be63-21af5f11ea06 is in state STARTED 2025-11-01 14:03:55.378917 | orchestrator | 2025-11-01 14:03:55 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:55.380361 | orchestrator | 
2025-11-01 14:03:55 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:55.382002 | orchestrator | 2025-11-01 14:03:55 | INFO  | Task 58121895-b68c-405d-a03a-578761b354a7 is in state STARTED 2025-11-01 14:03:55.386474 | orchestrator | 2025-11-01 14:03:55 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:03:55.392399 | orchestrator | 2025-11-01 14:03:55 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:03:55.392411 | orchestrator | 2025-11-01 14:03:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:03:58.563744 | orchestrator | 2025-11-01 14:03:58 | INFO  | Task deda0de2-f2a5-4841-be63-21af5f11ea06 is in state SUCCESS 2025-11-01 14:03:58.563815 | orchestrator | 2025-11-01 14:03:58 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:03:58.563828 | orchestrator | 2025-11-01 14:03:58 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:03:58.563839 | orchestrator | 2025-11-01 14:03:58 | INFO  | Task 58121895-b68c-405d-a03a-578761b354a7 is in state STARTED 2025-11-01 14:03:58.563850 | orchestrator | 2025-11-01 14:03:58 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:03:58.563862 | orchestrator | 2025-11-01 14:03:58 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:03:58.563874 | orchestrator | 2025-11-01 14:03:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:01.681145 | orchestrator | 2025-11-01 14:04:01 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:01.684967 | orchestrator | 2025-11-01 14:04:01 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:01.686355 | orchestrator | 2025-11-01 14:04:01 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:01.686486 | orchestrator | 2025-11-01 14:04:01 | INFO  | Task 58121895-b68c-405d-a03a-578761b354a7 is in state STARTED 2025-11-01 14:04:01.688176 | orchestrator | 2025-11-01 14:04:01 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:01.690687 | orchestrator | 2025-11-01 14:04:01 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:01.691719 | orchestrator | 2025-11-01 14:04:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:04.756165 | orchestrator | 2025-11-01 14:04:04 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:04.759906 | orchestrator | 2025-11-01 14:04:04 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:04.761936 | orchestrator | 2025-11-01 14:04:04 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:04.767064 | orchestrator | 2025-11-01 14:04:04 | INFO  | Task 58121895-b68c-405d-a03a-578761b354a7 is in state STARTED 2025-11-01 14:04:04.768073 | orchestrator | 2025-11-01 14:04:04 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:04.771658 | orchestrator | 2025-11-01 14:04:04 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:04.771684 | orchestrator | 2025-11-01 14:04:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:07.841867 | orchestrator | 2025-11-01 14:04:07 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 
14:04:07.841961 | orchestrator | 2025-11-01 14:04:07 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED
2025-11-01 14:04:07.841978 | orchestrator | 2025-11-01 14:04:07 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED
2025-11-01 14:04:07.843155 | orchestrator | 2025-11-01 14:04:07 | INFO  | Task 58121895-b68c-405d-a03a-578761b354a7 is in state SUCCESS
2025-11-01 14:04:07.844773 | orchestrator |
2025-11-01 14:04:07.844819 | orchestrator |
2025-11-01 14:04:07.844833 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 14:04:07.844845 | orchestrator |
2025-11-01 14:04:07.844857 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 14:04:07.844869 | orchestrator | Saturday 01 November 2025 14:03:40 +0000 (0:00:00.368) 0:00:00.368 *****
2025-11-01 14:04:07.844880 | orchestrator | ok: [testbed-node-0]
2025-11-01 14:04:07.844892 | orchestrator | ok: [testbed-node-1]
2025-11-01 14:04:07.844903 | orchestrator | ok: [testbed-node-2]
2025-11-01 14:04:07.844914 | orchestrator |
2025-11-01 14:04:07.844925 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-01 14:04:07.844936 | orchestrator | Saturday 01 November 2025 14:03:40 +0000 (0:00:00.371) 0:00:00.739 *****
2025-11-01 14:04:07.844948 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-11-01 14:04:07.844959 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-11-01 14:04:07.844970 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-11-01 14:04:07.844981 | orchestrator |
2025-11-01 14:04:07.844992 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-11-01 14:04:07.845003 | orchestrator |
2025-11-01 14:04:07.845014 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-11-01 14:04:07.845025 | orchestrator | Saturday 01 November 2025 14:03:41 +0000 (0:00:00.795) 0:00:01.534 *****
2025-11-01 14:04:07.845036 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 14:04:07.845047 | orchestrator |
2025-11-01 14:04:07.845058 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-11-01 14:04:07.845069 | orchestrator | Saturday 01 November 2025 14:03:42 +0000 (0:00:00.765) 0:00:02.300 *****
2025-11-01 14:04:07.845080 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-11-01 14:04:07.845091 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-11-01 14:04:07.845207 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-11-01 14:04:07.845223 | orchestrator |
2025-11-01 14:04:07.845234 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-11-01 14:04:07.845245 | orchestrator | Saturday 01 November 2025 14:03:43 +0000 (0:00:01.492) 0:00:03.793 *****
2025-11-01 14:04:07.845255 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-11-01 14:04:07.845266 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-11-01 14:04:07.845278 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-11-01 14:04:07.845289 | orchestrator |
2025-11-01 14:04:07.845301 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-11-01 14:04:07.845312 | orchestrator | Saturday 01 November 2025 14:03:46 +0000 (0:00:03.061) 0:00:06.854 *****
2025-11-01 14:04:07.845322 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:04:07.845333 | orchestrator | changed: [testbed-node-2]
2025-11-01 14:04:07.845344 | orchestrator | changed: [testbed-node-1]
2025-11-01 14:04:07.845355 | orchestrator |
2025-11-01 14:04:07.845365 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-11-01 14:04:07.845376 | orchestrator | Saturday 01 November 2025 14:03:49 +0000 (0:00:02.424) 0:00:09.278 *****
2025-11-01 14:04:07.845387 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:04:07.845398 | orchestrator | changed: [testbed-node-1]
2025-11-01 14:04:07.845408 | orchestrator | changed: [testbed-node-2]
2025-11-01 14:04:07.845444 | orchestrator |
2025-11-01 14:04:07.845455 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 14:04:07.845474 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 14:04:07.845486 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 14:04:07.845497 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 14:04:07.845508 | orchestrator |
2025-11-01 14:04:07.845519 | orchestrator |
2025-11-01 14:04:07.845530 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 14:04:07.845540 | orchestrator | Saturday 01 November 2025 14:03:57 +0000 (0:00:08.239) 0:00:17.518 *****
2025-11-01 14:04:07.845551 | orchestrator | ===============================================================================
2025-11-01 14:04:07.845562 | orchestrator | memcached : Restart memcached container --------------------------------- 8.24s
2025-11-01 14:04:07.845573 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.06s
2025-11-01 14:04:07.845583 | orchestrator | memcached : Check memcached container ----------------------------------- 2.42s
2025-11-01 14:04:07.845594 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.49s
2025-11-01 14:04:07.845605 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s
2025-11-01 14:04:07.845615 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.77s
2025-11-01 14:04:07.845626 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s
2025-11-01 14:04:07.845637 | orchestrator |
2025-11-01 14:04:07.845648 | orchestrator |
2025-11-01 14:04:07.845659 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 14:04:07.845670 | orchestrator |
2025-11-01 14:04:07.845680 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 14:04:07.845691 | orchestrator | Saturday 01 November 2025 14:03:40 +0000 (0:00:00.338) 0:00:00.338 *****
2025-11-01 14:04:07.845702 | orchestrator | ok: [testbed-node-0]
2025-11-01 14:04:07.845713 | orchestrator | ok: [testbed-node-1]
2025-11-01 14:04:07.845723 | orchestrator | ok: [testbed-node-2]
2025-11-01 14:04:07.845734 | orchestrator |
2025-11-01 14:04:07.845745 | orchestrator | TASK [Group hosts based on enabled services]
*********************************** 2025-11-01 14:04:07.845777 | orchestrator | Saturday 01 November 2025 14:03:40 +0000 (0:00:00.385) 0:00:00.723 ***** 2025-11-01 14:04:07.845789 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-11-01 14:04:07.845800 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-11-01 14:04:07.845811 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-11-01 14:04:07.845821 | orchestrator | 2025-11-01 14:04:07.845832 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-11-01 14:04:07.845845 | orchestrator | 2025-11-01 14:04:07.845858 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-11-01 14:04:07.845870 | orchestrator | Saturday 01 November 2025 14:03:41 +0000 (0:00:00.841) 0:00:01.564 ***** 2025-11-01 14:04:07.845883 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:04:07.845896 | orchestrator | 2025-11-01 14:04:07.845909 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-11-01 14:04:07.845922 | orchestrator | Saturday 01 November 2025 14:03:42 +0000 (0:00:00.791) 0:00:02.356 ***** 2025-11-01 14:04:07.845937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.845956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.845971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.845989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846126 | orchestrator | 2025-11-01 14:04:07.846139 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-11-01 14:04:07.846152 | orchestrator | Saturday 01 November 2025 14:03:44 +0000 (0:00:01.931) 0:00:04.287 ***** 2025-11-01 14:04:07.846166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846261 | orchestrator | 2025-11-01 14:04:07.846272 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-11-01 14:04:07.846283 | orchestrator | Saturday 01 November 2025 14:03:48 +0000 (0:00:04.162) 0:00:08.450 ***** 2025-11-01 14:04:07.846294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846375 | orchestrator | 2025-11-01 14:04:07.846391 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-11-01 14:04:07.846403 | orchestrator | Saturday 01 November 2025 14:03:51 +0000 (0:00:03.517) 0:00:11.968 ***** 2025-11-01 14:04:07.846449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 14:04:07.846521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-11-01 14:04:07.846533 | orchestrator |
2025-11-01 14:04:07.846544 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-11-01 14:04:07.846555 | orchestrator | Saturday 01 November 2025 14:03:54 +0000 (0:00:02.351) 0:00:14.319 *****
2025-11-01 14:04:07.846566 | orchestrator |
2025-11-01 14:04:07.846577 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-11-01 14:04:07.846594 | orchestrator | Saturday 01 November 2025 14:03:54 +0000 (0:00:00.096) 0:00:14.415 *****
2025-11-01 14:04:07.846605 | orchestrator |
2025-11-01 14:04:07.846616 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-11-01 14:04:07.846627 | orchestrator | Saturday 01 November 2025 14:03:54 +0000 (0:00:00.087) 0:00:14.503 *****
2025-11-01 14:04:07.846637 | orchestrator |
2025-11-01 14:04:07.846648 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-11-01 14:04:07.846659 | orchestrator | Saturday 01 November 2025 14:03:54 +0000 (0:00:00.078) 0:00:14.581 *****
2025-11-01 14:04:07.846670 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:04:07.846681 | orchestrator | changed: [testbed-node-1]
2025-11-01 14:04:07.846691 | orchestrator | changed: [testbed-node-2]
2025-11-01 14:04:07.846702 | orchestrator |
2025-11-01 14:04:07.846713 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-11-01 14:04:07.846724 | orchestrator | Saturday 01 November 2025 14:03:58 +0000 (0:00:04.157) 0:00:18.739 *****
2025-11-01 14:04:07.846734 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:04:07.846745 | orchestrator | changed: [testbed-node-1]
2025-11-01 14:04:07.846756 | orchestrator | changed: [testbed-node-2]
2025-11-01 14:04:07.846766 | orchestrator |
2025-11-01 14:04:07.846777 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 14:04:07.846788 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 14:04:07.846799 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 14:04:07.846810 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 14:04:07.846821 | orchestrator |
2025-11-01 14:04:07.846832 | orchestrator |
2025-11-01 14:04:07.846842 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 14:04:07.846853 | orchestrator | Saturday 01 November 2025 14:04:04 +0000 (0:00:06.209) 0:00:24.949 *****
2025-11-01 14:04:07.846864 | orchestrator | ===============================================================================
2025-11-01 14:04:07.846875 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 6.21s
2025-11-01 14:04:07.846886 | orchestrator | redis : Copying over default config.json files -------------------------- 4.16s
2025-11-01 14:04:07.846896 | orchestrator | redis : Restart redis container ----------------------------------------- 4.16s
2025-11-01 14:04:07.846914 | orchestrator | redis : Copying over redis config files --------------------------------- 3.52s
2025-11-01 14:04:07.846925 | orchestrator | redis
: Check redis containers ------------------------------------------ 2.35s 2025-11-01 14:04:07.846935 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.93s 2025-11-01 14:04:07.846946 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2025-11-01 14:04:07.846957 | orchestrator | redis : include_tasks --------------------------------------------------- 0.79s 2025-11-01 14:04:07.846968 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s 2025-11-01 14:04:07.846978 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.26s 2025-11-01 14:04:07.846989 | orchestrator | 2025-11-01 14:04:07 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:07.847000 | orchestrator | 2025-11-01 14:04:07 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:07.847012 | orchestrator | 2025-11-01 14:04:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:11.069127 | orchestrator | 2025-11-01 14:04:11 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:11.072699 | orchestrator | 2025-11-01 14:04:11 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:11.078605 | orchestrator | 2025-11-01 14:04:11 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:11.082603 | orchestrator | 2025-11-01 14:04:11 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:11.087979 | orchestrator | 2025-11-01 14:04:11 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:11.088007 | orchestrator | 2025-11-01 14:04:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:14.127834 | orchestrator | 2025-11-01 14:04:14 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:14.131132 | orchestrator | 2025-11-01 14:04:14 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:14.131169 | orchestrator | 2025-11-01 14:04:14 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:14.132776 | orchestrator | 2025-11-01 14:04:14 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:14.134287 | orchestrator | 2025-11-01 14:04:14 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:14.134313 | orchestrator | 2025-11-01 14:04:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:17.168191 | orchestrator | 2025-11-01 14:04:17 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:17.168457 | orchestrator | 2025-11-01 14:04:17 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:17.170470 | orchestrator | 2025-11-01 14:04:17 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:17.174082 | orchestrator | 2025-11-01 14:04:17 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:17.174114 | orchestrator | 2025-11-01 14:04:17 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:17.174127 | orchestrator | 2025-11-01 14:04:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:20.229292 | orchestrator | 2025-11-01 14:04:20 | INFO  | Task 
b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:20.232947 | orchestrator | 2025-11-01 14:04:20 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:20.233789 | orchestrator | 2025-11-01 14:04:20 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:20.235801 | orchestrator | 2025-11-01 14:04:20 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:20.236777 | orchestrator | 2025-11-01 14:04:20 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:20.236803 | orchestrator | 2025-11-01 14:04:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:23.300336 | orchestrator | 2025-11-01 14:04:23 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:23.301641 | orchestrator | 2025-11-01 14:04:23 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:23.305228 | orchestrator | 2025-11-01 14:04:23 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:23.306106 | orchestrator | 2025-11-01 14:04:23 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:23.309282 | orchestrator | 2025-11-01 14:04:23 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:23.309306 | orchestrator | 2025-11-01 14:04:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:26.353137 | orchestrator | 2025-11-01 14:04:26 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:26.355799 | orchestrator | 2025-11-01 14:04:26 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:26.358775 | orchestrator | 2025-11-01 14:04:26 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:26.359724 | orchestrator | 2025-11-01 14:04:26 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:26.360678 | orchestrator | 2025-11-01 14:04:26 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:26.360705 | orchestrator | 2025-11-01 14:04:26 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:29.411542 | orchestrator | 2025-11-01 14:04:29 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:29.411641 | orchestrator | 2025-11-01 14:04:29 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:29.411655 | orchestrator | 2025-11-01 14:04:29 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:29.411666 | orchestrator | 2025-11-01 14:04:29 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:29.411677 | orchestrator | 2025-11-01 14:04:29 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:29.411689 | orchestrator | 2025-11-01 14:04:29 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:32.508698 | orchestrator | 2025-11-01 14:04:32 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:32.509555 | orchestrator | 2025-11-01 14:04:32 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:32.510844 | orchestrator | 2025-11-01 14:04:32 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:32.511622 | orchestrator | 2025-11-01 
14:04:32 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:32.512662 | orchestrator | 2025-11-01 14:04:32 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:32.512833 | orchestrator | 2025-11-01 14:04:32 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:35.849631 | orchestrator | 2025-11-01 14:04:35 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:35.851292 | orchestrator | 2025-11-01 14:04:35 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:35.853631 | orchestrator | 2025-11-01 14:04:35 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:35.855190 | orchestrator | 2025-11-01 14:04:35 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:35.857214 | orchestrator | 2025-11-01 14:04:35 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:35.857487 | orchestrator | 2025-11-01 14:04:35 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:38.900701 | orchestrator | 2025-11-01 14:04:38 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:38.900802 | orchestrator | 2025-11-01 14:04:38 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:38.900983 | orchestrator | 2025-11-01 14:04:38 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:38.901951 | orchestrator | 2025-11-01 14:04:38 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:38.903126 | orchestrator | 2025-11-01 14:04:38 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:38.903152 | orchestrator | 2025-11-01 14:04:38 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:42.091417 | orchestrator | 2025-11-01 14:04:42 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:42.091557 | orchestrator | 2025-11-01 14:04:42 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:42.091570 | orchestrator | 2025-11-01 14:04:42 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:42.091582 | orchestrator | 2025-11-01 14:04:42 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:42.091593 | orchestrator | 2025-11-01 14:04:42 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:42.091604 | orchestrator | 2025-11-01 14:04:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:45.135868 | orchestrator | 2025-11-01 14:04:45 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:45.136317 | orchestrator | 2025-11-01 14:04:45 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:45.139645 | orchestrator | 2025-11-01 14:04:45 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:45.140321 | orchestrator | 2025-11-01 14:04:45 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:45.144205 | orchestrator | 2025-11-01 14:04:45 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:45.144333 | orchestrator | 2025-11-01 14:04:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:48.184950 | orchestrator | 2025-11-01 
14:04:48 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:48.185594 | orchestrator | 2025-11-01 14:04:48 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:48.187487 | orchestrator | 2025-11-01 14:04:48 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:48.188324 | orchestrator | 2025-11-01 14:04:48 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:48.190369 | orchestrator | 2025-11-01 14:04:48 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:48.190547 | orchestrator | 2025-11-01 14:04:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:51.236095 | orchestrator | 2025-11-01 14:04:51 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:51.236282 | orchestrator | 2025-11-01 14:04:51 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:51.238532 | orchestrator | 2025-11-01 14:04:51 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:51.239249 | orchestrator | 2025-11-01 14:04:51 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:51.240200 | orchestrator | 2025-11-01 14:04:51 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:51.240220 | orchestrator | 2025-11-01 14:04:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:54.289031 | orchestrator | 2025-11-01 14:04:54 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:54.290001 | orchestrator | 2025-11-01 14:04:54 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:54.291474 | orchestrator | 2025-11-01 14:04:54 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:54.292768 | orchestrator | 2025-11-01 14:04:54 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:54.294471 | orchestrator | 2025-11-01 14:04:54 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:54.294824 | orchestrator | 2025-11-01 14:04:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:04:57.338723 | orchestrator | 2025-11-01 14:04:57 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:04:57.344226 | orchestrator | 2025-11-01 14:04:57 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:04:57.347751 | orchestrator | 2025-11-01 14:04:57 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:04:57.349302 | orchestrator | 2025-11-01 14:04:57 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:04:57.350955 | orchestrator | 2025-11-01 14:04:57 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state STARTED 2025-11-01 14:04:57.350981 | orchestrator | 2025-11-01 14:04:57 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:00.385673 | orchestrator | 2025-11-01 14:05:00 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:00.386512 | orchestrator | 2025-11-01 14:05:00 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:05:00.388392 | orchestrator | 2025-11-01 14:05:00 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:00.390856 | 
orchestrator | 2025-11-01 14:05:00 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:00.397198 | orchestrator | 2025-11-01 14:05:00 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:00.397222 | orchestrator | 2025-11-01 14:05:00 | INFO  | Task 216bea31-7672-4a57-afea-307652db1862 is in state SUCCESS 2025-11-01 14:05:00.397235 | orchestrator | 2025-11-01 14:05:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:00.398514 | orchestrator | 2025-11-01 14:05:00.398550 | orchestrator | 2025-11-01 14:05:00.398562 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:05:00.398574 | orchestrator | 2025-11-01 14:05:00.398626 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:05:00.398639 | orchestrator | Saturday 01 November 2025 14:03:40 +0000 (0:00:00.336) 0:00:00.336 ***** 2025-11-01 14:05:00.398650 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:05:00.398663 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:05:00.398673 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:05:00.398684 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:00.398694 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:00.398705 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:00.398716 | orchestrator | 2025-11-01 14:05:00.398727 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:05:00.398737 | orchestrator | Saturday 01 November 2025 14:03:41 +0000 (0:00:01.000) 0:00:01.336 ***** 2025-11-01 14:05:00.398748 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-01 14:05:00.398759 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-01 14:05:00.398792 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-01 14:05:00.398804 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-01 14:05:00.398815 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-01 14:05:00.398826 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-01 14:05:00.398837 | orchestrator | 2025-11-01 14:05:00.398848 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-11-01 14:05:00.398858 | orchestrator | 2025-11-01 14:05:00.398869 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-11-01 14:05:00.398880 | orchestrator | Saturday 01 November 2025 14:03:42 +0000 (0:00:01.218) 0:00:02.554 ***** 2025-11-01 14:05:00.398891 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:05:00.398904 | orchestrator | 2025-11-01 14:05:00.398915 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-11-01 14:05:00.398926 | orchestrator | Saturday 01 November 2025 14:03:45 +0000 (0:00:02.802) 0:00:05.357 ***** 2025-11-01 14:05:00.398936 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-11-01 14:05:00.398947 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-11-01 14:05:00.398958 | 
orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-11-01 14:05:00.398969 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-11-01 14:05:00.398979 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-11-01 14:05:00.398990 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-11-01 14:05:00.399000 | orchestrator | 2025-11-01 14:05:00.399011 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-11-01 14:05:00.399022 | orchestrator | Saturday 01 November 2025 14:03:47 +0000 (0:00:02.190) 0:00:07.547 ***** 2025-11-01 14:05:00.399033 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-11-01 14:05:00.399044 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-11-01 14:05:00.399054 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-11-01 14:05:00.399065 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-11-01 14:05:00.399076 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-11-01 14:05:00.399086 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-11-01 14:05:00.399097 | orchestrator | 2025-11-01 14:05:00.399108 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-11-01 14:05:00.399119 | orchestrator | Saturday 01 November 2025 14:03:49 +0000 (0:00:01.974) 0:00:09.521 ***** 2025-11-01 14:05:00.399129 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-11-01 14:05:00.399140 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:00.399162 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-11-01 14:05:00.399173 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:00.399184 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-11-01 14:05:00.399194 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:00.399205 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-11-01 14:05:00.399215 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:00.399226 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-11-01 14:05:00.399237 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:00.399248 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-11-01 14:05:00.399258 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:00.399269 | orchestrator | 2025-11-01 14:05:00.399279 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-11-01 14:05:00.399290 | orchestrator | Saturday 01 November 2025 14:03:51 +0000 (0:00:02.179) 0:00:11.701 ***** 2025-11-01 14:05:00.399301 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:00.399311 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:00.399322 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:00.399333 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:00.399344 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:00.399354 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:00.399365 | orchestrator | 2025-11-01 14:05:00.399375 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-11-01 14:05:00.399386 | orchestrator | Saturday 01 November 2025 14:03:52 +0000 (0:00:01.155) 0:00:12.856 ***** 2025-11-01 14:05:00.399421 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399463 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399506 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399626 | orchestrator | 2025-11-01 14:05:00.399637 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-11-01 14:05:00.399653 | orchestrator | Saturday 01 November 2025 14:03:55 +0000 (0:00:02.803) 0:00:15.660 ***** 2025-11-01 14:05:00.399665 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399718 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399754 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2025-11-01 14:05:00.399765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399843 | orchestrator | 2025-11-01 14:05:00.399854 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-11-01 14:05:00.399865 | orchestrator | Saturday 01 November 2025 14:04:00 +0000 (0:00:05.092) 0:00:20.752 ***** 2025-11-01 14:05:00.399876 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:00.399887 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:00.399898 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:00.399908 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:00.399919 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:00.399930 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:00.399940 | orchestrator | 2025-11-01 14:05:00.399951 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-11-01 14:05:00.399962 | orchestrator | Saturday 01 November 2025 14:04:03 +0000 (0:00:02.430) 0:00:23.183 ***** 2025-11-01 14:05:00.399973 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.399990 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.400002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.400014 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.400037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.400049 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.400073 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.400085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.400096 | orchestrator | changed: 
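Each item in the "Check openvswitch containers" task above carries a Docker-style healthcheck definition (CMD-SHELL test, interval, retries, timeout in seconds). Assuming Docker as the container engine, the same check can be exercised by hand on a node; the container and command names below come from the items in the log, the docker invocations are illustrative and not part of the play.

# run the configured healthcheck command directly inside the container
docker exec openvswitch_db ovsdb-client list-dbs
# ask the engine for the aggregated health state it derives from that check
docker inspect --format '{{.State.Health.Status}}' openvswitch_db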
[testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.400108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 14:05:00.400130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.400142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 14:05:00.400161 | orchestrator | 2025-11-01 14:05:00.400173 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-11-01 14:05:00.400183 | orchestrator | Saturday 01 November 2025 14:04:06 +0000 (0:00:03.666) 0:00:26.849 ***** 2025-11-01 14:05:00.400194 | orchestrator | 2025-11-01 14:05:00.400205 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-11-01 14:05:00.400216 | orchestrator | Saturday 01 November 2025 14:04:06 +0000 (0:00:00.247) 0:00:27.097 ***** 2025-11-01 14:05:00.400226 | orchestrator | 2025-11-01 14:05:00.400237 | orchestrator | TASK [openvswitch : Flush 
Handlers] ******************************************** 2025-11-01 14:05:00.400248 | orchestrator | Saturday 01 November 2025 14:04:07 +0000 (0:00:00.339) 0:00:27.436 ***** 2025-11-01 14:05:00.400259 | orchestrator | 2025-11-01 14:05:00.400269 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-11-01 14:05:00.400280 | orchestrator | Saturday 01 November 2025 14:04:07 +0000 (0:00:00.246) 0:00:27.683 ***** 2025-11-01 14:05:00.400291 | orchestrator | 2025-11-01 14:05:00.400302 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-11-01 14:05:00.400312 | orchestrator | Saturday 01 November 2025 14:04:07 +0000 (0:00:00.224) 0:00:27.908 ***** 2025-11-01 14:05:00.400323 | orchestrator | 2025-11-01 14:05:00.400334 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-11-01 14:05:00.400345 | orchestrator | Saturday 01 November 2025 14:04:07 +0000 (0:00:00.172) 0:00:28.082 ***** 2025-11-01 14:05:00.400355 | orchestrator | 2025-11-01 14:05:00.400366 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-11-01 14:05:00.400377 | orchestrator | Saturday 01 November 2025 14:04:08 +0000 (0:00:00.261) 0:00:28.343 ***** 2025-11-01 14:05:00.400388 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:05:00.400398 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:00.400409 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:05:00.400420 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:00.400461 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:05:00.400473 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:00.400483 | orchestrator | 2025-11-01 14:05:00.400494 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-11-01 14:05:00.400505 | orchestrator | Saturday 01 November 2025 14:04:18 +0000 (0:00:10.028) 0:00:38.372 ***** 2025-11-01 14:05:00.400516 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:05:00.400526 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:05:00.400537 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:05:00.400548 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:00.400558 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:00.400569 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:00.400579 | orchestrator | 2025-11-01 14:05:00.400590 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-11-01 14:05:00.400601 | orchestrator | Saturday 01 November 2025 14:04:20 +0000 (0:00:01.781) 0:00:40.154 ***** 2025-11-01 14:05:00.400612 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:05:00.400623 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:05:00.400633 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:05:00.400644 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:00.400655 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:00.400665 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:00.400675 | orchestrator | 2025-11-01 14:05:00.400686 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-11-01 14:05:00.400697 | orchestrator | Saturday 01 November 2025 14:04:31 +0000 (0:00:11.140) 0:00:51.295 ***** 2025-11-01 14:05:00.400716 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-3'}) 2025-11-01 14:05:00.400727 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-11-01 14:05:00.400738 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-11-01 14:05:00.400749 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-11-01 14:05:00.400760 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-11-01 14:05:00.400776 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-11-01 14:05:00.400788 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-11-01 14:05:00.400799 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-11-01 14:05:00.400809 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-11-01 14:05:00.400820 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-11-01 14:05:00.400831 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-11-01 14:05:00.400842 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-11-01 14:05:00.400852 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-11-01 14:05:00.400863 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-11-01 14:05:00.400874 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-11-01 14:05:00.400892 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-11-01 14:05:00.400904 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-11-01 14:05:00.400914 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-11-01 14:05:00.400925 | orchestrator | 2025-11-01 14:05:00.400936 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-11-01 14:05:00.400947 | orchestrator | Saturday 01 November 2025 14:04:39 +0000 (0:00:08.509) 0:00:59.804 ***** 2025-11-01 14:05:00.400957 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-11-01 14:05:00.400968 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:00.400979 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-11-01 14:05:00.400990 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:00.401000 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-11-01 14:05:00.401011 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:00.401021 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-11-01 14:05:00.401032 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 
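The "Set system-id, hostname and hw-offload" results above and the "Ensuring OVS bridge is properly setup" / "Ensuring OVS ports are properly setup" results around this point reduce to a few ovs-vsctl operations per node. A minimal shell sketch of the equivalent calls for one host, assuming ovs-vsctl is pointed at the OVSDB socket shared through /run/openvswitch; the play drives these through its Ansible modules rather than a script like this.

# external_ids written for every node (testbed-node-0 shown as the example)
ovs-vsctl set Open_vSwitch . external_ids:system-id=testbed-node-0
ovs-vsctl set Open_vSwitch . external_ids:hostname=testbed-node-0
# 'state: absent' for hw-offload means the key is dropped from other_config if present
ovs-vsctl remove Open_vSwitch . other_config hw-offload
# bridge and port setup only runs where the external bridge is configured
# (testbed-node-0..2 in this run; nodes 3..5 are skipped)
ovs-vsctl --may-exist add-br br-ex
ovs-vsctl --may-exist add-port br-ex vxlan0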
2025-11-01 14:05:00.401042 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-11-01 14:05:00.401053 | orchestrator | 2025-11-01 14:05:00.401064 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-11-01 14:05:00.401075 | orchestrator | Saturday 01 November 2025 14:04:43 +0000 (0:00:03.654) 0:01:03.458 ***** 2025-11-01 14:05:00.401085 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-11-01 14:05:00.401096 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:00.401114 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-11-01 14:05:00.401125 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:00.401135 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-11-01 14:05:00.401146 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:00.401157 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-11-01 14:05:00.401167 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-11-01 14:05:00.401178 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-11-01 14:05:00.401189 | orchestrator | 2025-11-01 14:05:00.401199 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-11-01 14:05:00.401210 | orchestrator | Saturday 01 November 2025 14:04:48 +0000 (0:00:05.168) 0:01:08.626 ***** 2025-11-01 14:05:00.401220 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:05:00.401231 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:05:00.401242 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:05:00.401252 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:00.401263 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:00.401274 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:00.401284 | orchestrator | 2025-11-01 14:05:00.401295 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:05:00.401306 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-01 14:05:00.401317 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-01 14:05:00.401328 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-01 14:05:00.401339 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-01 14:05:00.401350 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-01 14:05:00.401371 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-01 14:05:00.401383 | orchestrator | 2025-11-01 14:05:00.401394 | orchestrator | 2025-11-01 14:05:00.401405 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:05:00.401416 | orchestrator | Saturday 01 November 2025 14:04:57 +0000 (0:00:08.950) 0:01:17.577 ***** 2025-11-01 14:05:00.401476 | orchestrator | =============================================================================== 2025-11-01 14:05:00.401489 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.09s 2025-11-01 14:05:00.401499 | orchestrator | openvswitch : Restart openvswitch-db-server container 
------------------ 10.03s 2025-11-01 14:05:00.401510 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.51s 2025-11-01 14:05:00.401521 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.17s 2025-11-01 14:05:00.401531 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.09s 2025-11-01 14:05:00.401542 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.67s 2025-11-01 14:05:00.401553 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.65s 2025-11-01 14:05:00.401563 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.80s 2025-11-01 14:05:00.401574 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.80s 2025-11-01 14:05:00.401584 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.43s 2025-11-01 14:05:00.401595 | orchestrator | module-load : Load modules ---------------------------------------------- 2.19s 2025-11-01 14:05:00.401619 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.18s 2025-11-01 14:05:00.401630 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.97s 2025-11-01 14:05:00.401641 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.78s 2025-11-01 14:05:00.401651 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.49s 2025-11-01 14:05:00.401662 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.22s 2025-11-01 14:05:00.401673 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.16s 2025-11-01 14:05:00.401683 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.00s 2025-11-01 14:05:03.458305 | orchestrator | 2025-11-01 14:05:03 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:03.459748 | orchestrator | 2025-11-01 14:05:03 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:05:03.460950 | orchestrator | 2025-11-01 14:05:03 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:03.461976 | orchestrator | 2025-11-01 14:05:03 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:03.462924 | orchestrator | 2025-11-01 14:05:03 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:03.463147 | orchestrator | 2025-11-01 14:05:03 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:06.512773 | orchestrator | 2025-11-01 14:05:06 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:06.517699 | orchestrator | 2025-11-01 14:05:06 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:05:06.519676 | orchestrator | 2025-11-01 14:05:06 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:06.521159 | orchestrator | 2025-11-01 14:05:06 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:06.522226 | orchestrator | 2025-11-01 14:05:06 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:06.522247 | orchestrator | 2025-11-01 14:05:06 | INFO  
| Wait 1 second(s) until the next check 2025-11-01 14:05:09.564337 | orchestrator | 2025-11-01 14:05:09 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:09.572283 | orchestrator | 2025-11-01 14:05:09 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:05:09.573408 | orchestrator | 2025-11-01 14:05:09 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:09.574565 | orchestrator | 2025-11-01 14:05:09 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:09.581362 | orchestrator | 2025-11-01 14:05:09 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:09.581382 | orchestrator | 2025-11-01 14:05:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:12.623415 | orchestrator | 2025-11-01 14:05:12 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:12.624249 | orchestrator | 2025-11-01 14:05:12 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:05:12.625327 | orchestrator | 2025-11-01 14:05:12 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:12.626271 | orchestrator | 2025-11-01 14:05:12 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:12.627320 | orchestrator | 2025-11-01 14:05:12 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:12.627361 | orchestrator | 2025-11-01 14:05:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:15.698613 | orchestrator | 2025-11-01 14:05:15 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:15.698667 | orchestrator | 2025-11-01 14:05:15 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:05:15.698678 | orchestrator | 2025-11-01 14:05:15 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:15.698689 | orchestrator | 2025-11-01 14:05:15 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:15.698699 | orchestrator | 2025-11-01 14:05:15 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:15.698710 | orchestrator | 2025-11-01 14:05:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:18.731902 | orchestrator | 2025-11-01 14:05:18 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:18.736086 | orchestrator | 2025-11-01 14:05:18 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:05:18.737766 | orchestrator | 2025-11-01 14:05:18 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:18.740607 | orchestrator | 2025-11-01 14:05:18 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:18.742721 | orchestrator | 2025-11-01 14:05:18 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:18.742743 | orchestrator | 2025-11-01 14:05:18 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:21.786385 | orchestrator | 2025-11-01 14:05:21 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:21.786528 | orchestrator | 2025-11-01 14:05:21 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:05:21.786547 | orchestrator | 2025-11-01 14:05:21 | INFO  | 
Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:21.786560 | orchestrator | 2025-11-01 14:05:21 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:21.786571 | orchestrator | 2025-11-01 14:05:21 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:21.786583 | orchestrator | 2025-11-01 14:05:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:24.860707 | orchestrator | 2025-11-01 14:05:24 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:24.861809 | orchestrator | 2025-11-01 14:05:24 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:05:24.863193 | orchestrator | 2025-11-01 14:05:24 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:24.864458 | orchestrator | 2025-11-01 14:05:24 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:24.865498 | orchestrator | 2025-11-01 14:05:24 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:24.865765 | orchestrator | 2025-11-01 14:05:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:27.913615 | orchestrator | 2025-11-01 14:05:27 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:27.913715 | orchestrator | 2025-11-01 14:05:27 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:05:27.913729 | orchestrator | 2025-11-01 14:05:27 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:27.913767 | orchestrator | 2025-11-01 14:05:27 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:27.914566 | orchestrator | 2025-11-01 14:05:27 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:27.914593 | orchestrator | 2025-11-01 14:05:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:30.981689 | orchestrator | 2025-11-01 14:05:30 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:30.981905 | orchestrator | 2025-11-01 14:05:30 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state STARTED 2025-11-01 14:05:30.982783 | orchestrator | 2025-11-01 14:05:30 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:30.983698 | orchestrator | 2025-11-01 14:05:30 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:30.984805 | orchestrator | 2025-11-01 14:05:30 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:30.984826 | orchestrator | 2025-11-01 14:05:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:34.039050 | orchestrator | 2025-11-01 14:05:34 | INFO  | Task f43edd62-cd3c-4025-8536-9b38bb41d8f3 is in state STARTED 2025-11-01 14:05:34.042013 | orchestrator | 2025-11-01 14:05:34 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:34.043427 | orchestrator | 2025-11-01 14:05:34.043492 | orchestrator | 2025-11-01 14:05:34 | INFO  | Task b6a64540-89c6-4cb7-9a1f-dc6ba185c825 is in state SUCCESS 2025-11-01 14:05:34.044787 | orchestrator | 2025-11-01 14:05:34.044816 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-11-01 14:05:34.044828 | orchestrator | 2025-11-01 14:05:34.044840 | orchestrator | TASK [k3s_prereq : 
Validating arguments against arg spec 'main' - Prerequisites] *** 2025-11-01 14:05:34.044852 | orchestrator | Saturday 01 November 2025 14:01:03 +0000 (0:00:00.201) 0:00:00.201 ***** 2025-11-01 14:05:34.044863 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:05:34.044875 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:05:34.044886 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:05:34.044897 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.044908 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.044919 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.044930 | orchestrator | 2025-11-01 14:05:34.044941 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-11-01 14:05:34.045034 | orchestrator | Saturday 01 November 2025 14:01:04 +0000 (0:00:00.898) 0:00:01.100 ***** 2025-11-01 14:05:34.045050 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.045063 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.045074 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.045085 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.045096 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.045107 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.045118 | orchestrator | 2025-11-01 14:05:34.045129 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-11-01 14:05:34.045140 | orchestrator | Saturday 01 November 2025 14:01:05 +0000 (0:00:00.890) 0:00:01.990 ***** 2025-11-01 14:05:34.045151 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.045162 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.045172 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.045183 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.045194 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.045205 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.045216 | orchestrator | 2025-11-01 14:05:34.045227 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-11-01 14:05:34.045238 | orchestrator | Saturday 01 November 2025 14:01:06 +0000 (0:00:00.981) 0:00:02.972 ***** 2025-11-01 14:05:34.045274 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:05:34.045285 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:05:34.045296 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.045307 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.045317 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.045329 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:05:34.045340 | orchestrator | 2025-11-01 14:05:34.045351 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-11-01 14:05:34.045362 | orchestrator | Saturday 01 November 2025 14:01:09 +0000 (0:00:03.197) 0:00:06.169 ***** 2025-11-01 14:05:34.045372 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:05:34.045383 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:05:34.045394 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:05:34.045404 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.045415 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.045426 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.045471 | orchestrator | 2025-11-01 14:05:34.045488 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] 
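The k3s_prereq tasks reported as changed here ("Enable IPv4 forwarding", "Enable IPv6 forwarding" and the "Enable IPv6 router advertisements" task whose results follow) are ordinary sysctl changes. A minimal sketch of what they amount to, assuming the values such roles conventionally set (1, 1 and 2 respectively); the role applies them persistently through Ansible, whereas these one-shot commands would also need a drop-in under /etc/sysctl.d/ to survive a reboot.

# kernel settings k3s expects on every node
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1
sysctl -w net.ipv6.conf.all.accept_ra=2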
************************** 2025-11-01 14:05:34.045507 | orchestrator | Saturday 01 November 2025 14:01:10 +0000 (0:00:01.392) 0:00:07.562 ***** 2025-11-01 14:05:34.045527 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:05:34.045545 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:05:34.045564 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.045576 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.045586 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:05:34.045597 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.045608 | orchestrator | 2025-11-01 14:05:34.045618 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-11-01 14:05:34.045629 | orchestrator | Saturday 01 November 2025 14:01:14 +0000 (0:00:03.280) 0:00:10.842 ***** 2025-11-01 14:05:34.045640 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.045651 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.045661 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.045674 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.045686 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.045699 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.045712 | orchestrator | 2025-11-01 14:05:34.045725 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-11-01 14:05:34.045738 | orchestrator | Saturday 01 November 2025 14:01:14 +0000 (0:00:00.865) 0:00:11.708 ***** 2025-11-01 14:05:34.045750 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.045763 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.045775 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.045787 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.045799 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.045811 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.045824 | orchestrator | 2025-11-01 14:05:34.045836 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-11-01 14:05:34.045864 | orchestrator | Saturday 01 November 2025 14:01:15 +0000 (0:00:00.948) 0:00:12.656 ***** 2025-11-01 14:05:34.045877 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 14:05:34.045890 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 14:05:34.045904 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.045917 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 14:05:34.045930 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 14:05:34.045942 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.045955 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 14:05:34.045967 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 14:05:34.045980 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.046002 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 14:05:34.046072 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 14:05:34.046086 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.046097 | orchestrator | skipping: 
[testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 14:05:34.046108 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 14:05:34.046119 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.046130 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 14:05:34.046140 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 14:05:34.046151 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.046162 | orchestrator | 2025-11-01 14:05:34.046172 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-11-01 14:05:34.046183 | orchestrator | Saturday 01 November 2025 14:01:16 +0000 (0:00:00.788) 0:00:13.444 ***** 2025-11-01 14:05:34.046194 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.046205 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.046215 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.046226 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.046237 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.046248 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.046259 | orchestrator | 2025-11-01 14:05:34.046269 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-11-01 14:05:34.046282 | orchestrator | Saturday 01 November 2025 14:01:18 +0000 (0:00:01.633) 0:00:15.078 ***** 2025-11-01 14:05:34.046293 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:05:34.046303 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:05:34.046314 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:05:34.046325 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.046336 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.046346 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.046357 | orchestrator | 2025-11-01 14:05:34.046368 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-11-01 14:05:34.046379 | orchestrator | Saturday 01 November 2025 14:01:20 +0000 (0:00:01.901) 0:00:16.979 ***** 2025-11-01 14:05:34.046390 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:05:34.046401 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:05:34.046411 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.046422 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.046461 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.046473 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:05:34.046484 | orchestrator | 2025-11-01 14:05:34.046495 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-11-01 14:05:34.046506 | orchestrator | Saturday 01 November 2025 14:01:26 +0000 (0:00:06.797) 0:00:23.777 ***** 2025-11-01 14:05:34.046516 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.046527 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.046537 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.046548 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.046559 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.046569 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.046580 | orchestrator | 2025-11-01 14:05:34.046591 | orchestrator | TASK [k3s_download : Download k3s binary armhf] 
******************************** 2025-11-01 14:05:34.046602 | orchestrator | Saturday 01 November 2025 14:01:31 +0000 (0:00:04.589) 0:00:28.367 ***** 2025-11-01 14:05:34.046612 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.046623 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.046634 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.046644 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.046655 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.046673 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.046684 | orchestrator | 2025-11-01 14:05:34.046695 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-11-01 14:05:34.046707 | orchestrator | Saturday 01 November 2025 14:01:33 +0000 (0:00:01.446) 0:00:29.813 ***** 2025-11-01 14:05:34.046718 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.046728 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.046739 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.046749 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.046857 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.046872 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.046883 | orchestrator | 2025-11-01 14:05:34.046894 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-11-01 14:05:34.046905 | orchestrator | Saturday 01 November 2025 14:01:33 +0000 (0:00:00.543) 0:00:30.356 ***** 2025-11-01 14:05:34.046916 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-11-01 14:05:34.046928 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-11-01 14:05:34.046939 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.046949 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-11-01 14:05:34.046968 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-11-01 14:05:34.046979 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.046990 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-11-01 14:05:34.047001 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-11-01 14:05:34.047011 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.047023 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-11-01 14:05:34.047033 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-11-01 14:05:34.047044 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-11-01 14:05:34.047054 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-11-01 14:05:34.047065 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.047076 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.047087 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-11-01 14:05:34.047098 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-11-01 14:05:34.047109 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.047119 | orchestrator | 2025-11-01 14:05:34.047130 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-11-01 14:05:34.047149 | orchestrator | Saturday 01 November 2025 14:01:34 +0000 (0:00:01.032) 0:00:31.389 ***** 2025-11-01 14:05:34.047160 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.047171 | 
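Every task of the k3s_custom_registries role is skipped in this run, i.e. no private registry mirror is configured for k3s. For reference, the /etc/rancher/k3s/registries.yaml file those tasks would manage uses the standard k3s mirrors/endpoint layout; registry.example.com below is a placeholder, not a value from this deployment.

# illustrative only -- this run configured no custom registries
mkdir -p /etc/rancher/k3s
cat > /etc/rancher/k3s/registries.yaml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com"
EOF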
orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.047181 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.047192 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.047203 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.047213 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.047224 | orchestrator | 2025-11-01 14:05:34.047235 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2025-11-01 14:05:34.047246 | orchestrator | Saturday 01 November 2025 14:01:35 +0000 (0:00:00.726) 0:00:32.115 ***** 2025-11-01 14:05:34.047256 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.047267 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.047278 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.047288 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.047299 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.047310 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.047320 | orchestrator | 2025-11-01 14:05:34.047331 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-11-01 14:05:34.047342 | orchestrator | 2025-11-01 14:05:34.047353 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-11-01 14:05:34.047372 | orchestrator | Saturday 01 November 2025 14:01:36 +0000 (0:00:01.573) 0:00:33.689 ***** 2025-11-01 14:05:34.047383 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.047393 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.047404 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.047415 | orchestrator | 2025-11-01 14:05:34.047426 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-11-01 14:05:34.047481 | orchestrator | Saturday 01 November 2025 14:01:38 +0000 (0:00:01.747) 0:00:35.437 ***** 2025-11-01 14:05:34.047494 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.047507 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.047520 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.047532 | orchestrator | 2025-11-01 14:05:34.047545 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-11-01 14:05:34.047558 | orchestrator | Saturday 01 November 2025 14:01:40 +0000 (0:00:01.531) 0:00:36.969 ***** 2025-11-01 14:05:34.047570 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.047583 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.047595 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.047607 | orchestrator | 2025-11-01 14:05:34.047620 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-11-01 14:05:34.047632 | orchestrator | Saturday 01 November 2025 14:01:41 +0000 (0:00:01.410) 0:00:38.380 ***** 2025-11-01 14:05:34.047644 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.047745 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.047762 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.047774 | orchestrator | 2025-11-01 14:05:34.047787 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-11-01 14:05:34.047801 | orchestrator | Saturday 01 November 2025 14:01:43 +0000 (0:00:01.522) 0:00:39.902 ***** 2025-11-01 14:05:34.047813 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.047826 | 
orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.047838 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.047851 | orchestrator | 2025-11-01 14:05:34.047862 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-11-01 14:05:34.047873 | orchestrator | Saturday 01 November 2025 14:01:43 +0000 (0:00:00.362) 0:00:40.265 ***** 2025-11-01 14:05:34.047884 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.047894 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.047905 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.047916 | orchestrator | 2025-11-01 14:05:34.047927 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-11-01 14:05:34.047937 | orchestrator | Saturday 01 November 2025 14:01:44 +0000 (0:00:01.494) 0:00:41.760 ***** 2025-11-01 14:05:34.047948 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.047959 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.047969 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.047980 | orchestrator | 2025-11-01 14:05:34.047991 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-11-01 14:05:34.048002 | orchestrator | Saturday 01 November 2025 14:01:47 +0000 (0:00:02.039) 0:00:43.800 ***** 2025-11-01 14:05:34.048012 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:05:34.048023 | orchestrator | 2025-11-01 14:05:34.048034 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-11-01 14:05:34.048045 | orchestrator | Saturday 01 November 2025 14:01:47 +0000 (0:00:00.873) 0:00:44.673 ***** 2025-11-01 14:05:34.048056 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.048066 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.048077 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.048088 | orchestrator | 2025-11-01 14:05:34.048099 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-11-01 14:05:34.048115 | orchestrator | Saturday 01 November 2025 14:01:52 +0000 (0:00:04.121) 0:00:48.794 ***** 2025-11-01 14:05:34.048127 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.048137 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.048154 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.048165 | orchestrator | 2025-11-01 14:05:34.048176 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-11-01 14:05:34.048187 | orchestrator | Saturday 01 November 2025 14:01:53 +0000 (0:00:01.056) 0:00:49.850 ***** 2025-11-01 14:05:34.048197 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.048208 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.048219 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.048229 | orchestrator | 2025-11-01 14:05:34.048240 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-11-01 14:05:34.048251 | orchestrator | Saturday 01 November 2025 14:01:54 +0000 (0:00:01.251) 0:00:51.102 ***** 2025-11-01 14:05:34.048262 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.048272 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.048283 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.048294 | orchestrator | 
2025-11-01 14:05:34.048304 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-11-01 14:05:34.048323 | orchestrator | Saturday 01 November 2025 14:01:55 +0000 (0:00:01.621) 0:00:52.723 ***** 2025-11-01 14:05:34.048334 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.048345 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.048356 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.048366 | orchestrator | 2025-11-01 14:05:34.048377 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-11-01 14:05:34.048388 | orchestrator | Saturday 01 November 2025 14:01:57 +0000 (0:00:01.369) 0:00:54.092 ***** 2025-11-01 14:05:34.048399 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.048409 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.048420 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.048448 | orchestrator | 2025-11-01 14:05:34.048460 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-11-01 14:05:34.048471 | orchestrator | Saturday 01 November 2025 14:01:57 +0000 (0:00:00.674) 0:00:54.767 ***** 2025-11-01 14:05:34.048481 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.048492 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.048503 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.048513 | orchestrator | 2025-11-01 14:05:34.048524 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2025-11-01 14:05:34.048535 | orchestrator | Saturday 01 November 2025 14:01:59 +0000 (0:00:01.870) 0:00:56.638 ***** 2025-11-01 14:05:34.048546 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.048557 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.048567 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.048578 | orchestrator | 2025-11-01 14:05:34.048589 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2025-11-01 14:05:34.048600 | orchestrator | Saturday 01 November 2025 14:02:03 +0000 (0:00:03.152) 0:00:59.791 ***** 2025-11-01 14:05:34.048611 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.048621 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.048632 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.048643 | orchestrator | 2025-11-01 14:05:34.048654 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-11-01 14:05:34.048665 | orchestrator | Saturday 01 November 2025 14:02:04 +0000 (0:00:01.389) 0:01:01.180 ***** 2025-11-01 14:05:34.048676 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-11-01 14:05:34.048687 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-11-01 14:05:34.048698 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-11-01 14:05:34.048709 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2025-11-01 14:05:34.048727 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-11-01 14:05:34.048738 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-11-01 14:05:34.048749 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-11-01 14:05:34.048760 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-11-01 14:05:34.048770 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-11-01 14:05:34.048782 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-11-01 14:05:34.048792 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-11-01 14:05:34.048803 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-11-01 14:05:34.048814 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.048825 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.048835 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.048846 | orchestrator | 2025-11-01 14:05:34.048862 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-11-01 14:05:34.048873 | orchestrator | Saturday 01 November 2025 14:02:48 +0000 (0:00:43.789) 0:01:44.970 ***** 2025-11-01 14:05:34.048884 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.048895 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.048906 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.048916 | orchestrator | 2025-11-01 14:05:34.048927 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-11-01 14:05:34.048938 | orchestrator | Saturday 01 November 2025 14:02:48 +0000 (0:00:00.334) 0:01:45.304 ***** 2025-11-01 14:05:34.048949 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.048959 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.048970 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.048981 | orchestrator | 2025-11-01 14:05:34.048992 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-11-01 14:05:34.049003 | orchestrator | Saturday 01 November 2025 14:02:49 +0000 (0:00:01.003) 0:01:46.308 ***** 2025-11-01 14:05:34.049013 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.049024 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.049035 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.049046 | orchestrator | 2025-11-01 14:05:34.049062 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-11-01 14:05:34.049073 | orchestrator | Saturday 01 November 2025 14:02:50 +0000 (0:00:01.326) 0:01:47.634 ***** 2025-11-01 14:05:34.049084 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.049095 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.049105 
| orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.049116 | orchestrator | 2025-11-01 14:05:34.049127 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-11-01 14:05:34.049138 | orchestrator | Saturday 01 November 2025 14:03:18 +0000 (0:00:27.272) 0:02:14.907 ***** 2025-11-01 14:05:34.049148 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.049159 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.049170 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.049181 | orchestrator | 2025-11-01 14:05:34.049191 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-11-01 14:05:34.049202 | orchestrator | Saturday 01 November 2025 14:03:18 +0000 (0:00:00.702) 0:02:15.609 ***** 2025-11-01 14:05:34.049219 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.049230 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.049241 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.049252 | orchestrator | 2025-11-01 14:05:34.049263 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-11-01 14:05:34.049274 | orchestrator | Saturday 01 November 2025 14:03:19 +0000 (0:00:00.648) 0:02:16.258 ***** 2025-11-01 14:05:34.049284 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.049295 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.049306 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.049317 | orchestrator | 2025-11-01 14:05:34.049328 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-11-01 14:05:34.049338 | orchestrator | Saturday 01 November 2025 14:03:20 +0000 (0:00:00.696) 0:02:16.954 ***** 2025-11-01 14:05:34.049349 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.049360 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.049370 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.049381 | orchestrator | 2025-11-01 14:05:34.049392 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-11-01 14:05:34.049403 | orchestrator | Saturday 01 November 2025 14:03:21 +0000 (0:00:00.947) 0:02:17.902 ***** 2025-11-01 14:05:34.049414 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.049424 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.049484 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.049496 | orchestrator | 2025-11-01 14:05:34.049507 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-11-01 14:05:34.049518 | orchestrator | Saturday 01 November 2025 14:03:21 +0000 (0:00:00.328) 0:02:18.230 ***** 2025-11-01 14:05:34.049529 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.049540 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.049551 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.049561 | orchestrator | 2025-11-01 14:05:34.049572 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-11-01 14:05:34.049583 | orchestrator | Saturday 01 November 2025 14:03:22 +0000 (0:00:00.695) 0:02:18.926 ***** 2025-11-01 14:05:34.049594 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.049605 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.049615 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.049626 | orchestrator | 2025-11-01 14:05:34.049637 | orchestrator | TASK 
[k3s_server : Copy config file to user home directory] ******************** 2025-11-01 14:05:34.049648 | orchestrator | Saturday 01 November 2025 14:03:22 +0000 (0:00:00.679) 0:02:19.605 ***** 2025-11-01 14:05:34.049659 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.049670 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.049680 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.049691 | orchestrator | 2025-11-01 14:05:34.049701 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-11-01 14:05:34.049710 | orchestrator | Saturday 01 November 2025 14:03:24 +0000 (0:00:01.224) 0:02:20.830 ***** 2025-11-01 14:05:34.049720 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:05:34.049730 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:05:34.049739 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:05:34.049749 | orchestrator | 2025-11-01 14:05:34.049759 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-11-01 14:05:34.049768 | orchestrator | Saturday 01 November 2025 14:03:24 +0000 (0:00:00.909) 0:02:21.740 ***** 2025-11-01 14:05:34.049778 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.049787 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.049797 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.049806 | orchestrator | 2025-11-01 14:05:34.049816 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-11-01 14:05:34.049826 | orchestrator | Saturday 01 November 2025 14:03:25 +0000 (0:00:00.521) 0:02:22.261 ***** 2025-11-01 14:05:34.049835 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.049851 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.049861 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.049871 | orchestrator | 2025-11-01 14:05:34.049885 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-11-01 14:05:34.049895 | orchestrator | Saturday 01 November 2025 14:03:25 +0000 (0:00:00.499) 0:02:22.760 ***** 2025-11-01 14:05:34.049905 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.049914 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.049924 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.049933 | orchestrator | 2025-11-01 14:05:34.049943 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-11-01 14:05:34.049953 | orchestrator | Saturday 01 November 2025 14:03:27 +0000 (0:00:01.344) 0:02:24.105 ***** 2025-11-01 14:05:34.049962 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.049972 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.049981 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.049991 | orchestrator | 2025-11-01 14:05:34.050000 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-11-01 14:05:34.050010 | orchestrator | Saturday 01 November 2025 14:03:28 +0000 (0:00:00.835) 0:02:24.940 ***** 2025-11-01 14:05:34.050046 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-11-01 14:05:34.050062 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-11-01 14:05:34.050072 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-11-01 14:05:34.050082 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-11-01 14:05:34.050092 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-11-01 14:05:34.050101 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-11-01 14:05:34.050111 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-11-01 14:05:34.050120 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-11-01 14:05:34.050130 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-11-01 14:05:34.050139 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-11-01 14:05:34.050149 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-11-01 14:05:34.050159 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-11-01 14:05:34.050168 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-11-01 14:05:34.050178 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-11-01 14:05:34.050187 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-11-01 14:05:34.050197 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-11-01 14:05:34.050206 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-11-01 14:05:34.050216 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-11-01 14:05:34.050226 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-11-01 14:05:34.050235 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-11-01 14:05:34.050245 | orchestrator | 2025-11-01 14:05:34.050255 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-11-01 14:05:34.050265 | orchestrator | 2025-11-01 14:05:34.050274 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-11-01 14:05:34.050290 | orchestrator | Saturday 01 November 2025 14:03:31 +0000 (0:00:03.432) 0:02:28.373 ***** 2025-11-01 14:05:34.050300 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:05:34.050310 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:05:34.050319 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:05:34.050329 | orchestrator | 2025-11-01 14:05:34.050339 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-11-01 14:05:34.050348 | orchestrator | Saturday 01 November 2025 14:03:31 +0000 (0:00:00.385) 0:02:28.758 ***** 2025-11-01 14:05:34.050358 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:05:34.050367 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:05:34.050377 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:05:34.050386 | orchestrator | 2025-11-01 14:05:34.050396 | orchestrator | TASK [k3s_agent : Set fact 
for PXE-booted system] ****************************** 2025-11-01 14:05:34.050406 | orchestrator | Saturday 01 November 2025 14:03:33 +0000 (0:00:01.368) 0:02:30.127 ***** 2025-11-01 14:05:34.050415 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:05:34.050424 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:05:34.050451 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:05:34.050461 | orchestrator | 2025-11-01 14:05:34.050471 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-11-01 14:05:34.050480 | orchestrator | Saturday 01 November 2025 14:03:33 +0000 (0:00:00.264) 0:02:30.391 ***** 2025-11-01 14:05:34.050490 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:05:34.050500 | orchestrator | 2025-11-01 14:05:34.050510 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-11-01 14:05:34.050520 | orchestrator | Saturday 01 November 2025 14:03:34 +0000 (0:00:00.725) 0:02:31.117 ***** 2025-11-01 14:05:34.050529 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.050539 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.050559 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.050569 | orchestrator | 2025-11-01 14:05:34.050579 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-11-01 14:05:34.050589 | orchestrator | Saturday 01 November 2025 14:03:34 +0000 (0:00:00.371) 0:02:31.488 ***** 2025-11-01 14:05:34.050598 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.050608 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.050618 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.050627 | orchestrator | 2025-11-01 14:05:34.050637 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-11-01 14:05:34.050732 | orchestrator | Saturday 01 November 2025 14:03:35 +0000 (0:00:00.373) 0:02:31.861 ***** 2025-11-01 14:05:34.050743 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:05:34.050753 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:05:34.050762 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:05:34.050772 | orchestrator | 2025-11-01 14:05:34.050782 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-11-01 14:05:34.050792 | orchestrator | Saturday 01 November 2025 14:03:35 +0000 (0:00:00.376) 0:02:32.238 ***** 2025-11-01 14:05:34.050801 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:05:34.050811 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:05:34.050821 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:05:34.050830 | orchestrator | 2025-11-01 14:05:34.050846 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-11-01 14:05:34.050856 | orchestrator | Saturday 01 November 2025 14:03:36 +0000 (0:00:00.877) 0:02:33.115 ***** 2025-11-01 14:05:34.050866 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:05:34.050876 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:05:34.050885 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:05:34.050895 | orchestrator | 2025-11-01 14:05:34.050904 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-11-01 14:05:34.050914 | orchestrator | Saturday 01 November 2025 14:03:37 +0000 
(0:00:00.991) 0:02:34.107 ***** 2025-11-01 14:05:34.050931 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:05:34.050941 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:05:34.050950 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:05:34.050960 | orchestrator | 2025-11-01 14:05:34.050970 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-11-01 14:05:34.050980 | orchestrator | Saturday 01 November 2025 14:03:38 +0000 (0:00:01.139) 0:02:35.247 ***** 2025-11-01 14:05:34.050989 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:05:34.050999 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:05:34.051009 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:05:34.051018 | orchestrator | 2025-11-01 14:05:34.051028 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-11-01 14:05:34.051037 | orchestrator | 2025-11-01 14:05:34.051047 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-11-01 14:05:34.051057 | orchestrator | Saturday 01 November 2025 14:03:48 +0000 (0:00:10.393) 0:02:45.640 ***** 2025-11-01 14:05:34.051066 | orchestrator | ok: [testbed-manager] 2025-11-01 14:05:34.051076 | orchestrator | 2025-11-01 14:05:34.051086 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-11-01 14:05:34.051095 | orchestrator | Saturday 01 November 2025 14:03:49 +0000 (0:00:01.093) 0:02:46.733 ***** 2025-11-01 14:05:34.051105 | orchestrator | changed: [testbed-manager] 2025-11-01 14:05:34.051115 | orchestrator | 2025-11-01 14:05:34.051124 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-11-01 14:05:34.051134 | orchestrator | Saturday 01 November 2025 14:03:50 +0000 (0:00:00.524) 0:02:47.258 ***** 2025-11-01 14:05:34.051144 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-11-01 14:05:34.051153 | orchestrator | 2025-11-01 14:05:34.051163 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-11-01 14:05:34.051173 | orchestrator | Saturday 01 November 2025 14:03:51 +0000 (0:00:00.649) 0:02:47.907 ***** 2025-11-01 14:05:34.051182 | orchestrator | changed: [testbed-manager] 2025-11-01 14:05:34.051192 | orchestrator | 2025-11-01 14:05:34.051202 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-11-01 14:05:34.051211 | orchestrator | Saturday 01 November 2025 14:03:52 +0000 (0:00:01.033) 0:02:48.941 ***** 2025-11-01 14:05:34.051221 | orchestrator | changed: [testbed-manager] 2025-11-01 14:05:34.051231 | orchestrator | 2025-11-01 14:05:34.051240 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-11-01 14:05:34.051250 | orchestrator | Saturday 01 November 2025 14:03:52 +0000 (0:00:00.660) 0:02:49.601 ***** 2025-11-01 14:05:34.051260 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-01 14:05:34.051269 | orchestrator | 2025-11-01 14:05:34.051279 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-11-01 14:05:34.051288 | orchestrator | Saturday 01 November 2025 14:03:55 +0000 (0:00:02.267) 0:02:51.869 ***** 2025-11-01 14:05:34.051298 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-01 14:05:34.051308 | orchestrator | 2025-11-01 14:05:34.051317 | orchestrator | TASK [Set 
KUBECONFIG environment variable] ************************************* 2025-11-01 14:05:34.051327 | orchestrator | Saturday 01 November 2025 14:03:56 +0000 (0:00:01.057) 0:02:52.926 ***** 2025-11-01 14:05:34.051337 | orchestrator | changed: [testbed-manager] 2025-11-01 14:05:34.051346 | orchestrator | 2025-11-01 14:05:34.051356 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-11-01 14:05:34.051368 | orchestrator | Saturday 01 November 2025 14:03:56 +0000 (0:00:00.591) 0:02:53.517 ***** 2025-11-01 14:05:34.051379 | orchestrator | changed: [testbed-manager] 2025-11-01 14:05:34.051390 | orchestrator | 2025-11-01 14:05:34.051401 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-11-01 14:05:34.051412 | orchestrator | 2025-11-01 14:05:34.051424 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-11-01 14:05:34.051453 | orchestrator | Saturday 01 November 2025 14:03:57 +0000 (0:00:00.848) 0:02:54.365 ***** 2025-11-01 14:05:34.051472 | orchestrator | ok: [testbed-manager] 2025-11-01 14:05:34.051483 | orchestrator | 2025-11-01 14:05:34.051494 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-11-01 14:05:34.051505 | orchestrator | Saturday 01 November 2025 14:03:57 +0000 (0:00:00.167) 0:02:54.533 ***** 2025-11-01 14:05:34.051522 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-11-01 14:05:34.051533 | orchestrator | 2025-11-01 14:05:34.051545 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-11-01 14:05:34.051556 | orchestrator | Saturday 01 November 2025 14:03:58 +0000 (0:00:00.331) 0:02:54.864 ***** 2025-11-01 14:05:34.051567 | orchestrator | ok: [testbed-manager] 2025-11-01 14:05:34.051577 | orchestrator | 2025-11-01 14:05:34.051589 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-11-01 14:05:34.051600 | orchestrator | Saturday 01 November 2025 14:03:59 +0000 (0:00:01.042) 0:02:55.907 ***** 2025-11-01 14:05:34.051611 | orchestrator | ok: [testbed-manager] 2025-11-01 14:05:34.051622 | orchestrator | 2025-11-01 14:05:34.051633 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-11-01 14:05:34.051644 | orchestrator | Saturday 01 November 2025 14:04:00 +0000 (0:00:01.854) 0:02:57.761 ***** 2025-11-01 14:05:34.051655 | orchestrator | changed: [testbed-manager] 2025-11-01 14:05:34.051666 | orchestrator | 2025-11-01 14:05:34.051677 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-11-01 14:05:34.051689 | orchestrator | Saturday 01 November 2025 14:04:01 +0000 (0:00:00.880) 0:02:58.641 ***** 2025-11-01 14:05:34.051700 | orchestrator | ok: [testbed-manager] 2025-11-01 14:05:34.051712 | orchestrator | 2025-11-01 14:05:34.051727 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-11-01 14:05:34.051737 | orchestrator | Saturday 01 November 2025 14:04:02 +0000 (0:00:00.557) 0:02:59.199 ***** 2025-11-01 14:05:34.051747 | orchestrator | changed: [testbed-manager] 2025-11-01 14:05:34.051756 | orchestrator | 2025-11-01 14:05:34.051766 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-11-01 14:05:34.051776 | orchestrator | Saturday 01 
November 2025 14:04:12 +0000 (0:00:10.156) 0:03:09.355 ***** 2025-11-01 14:05:34.051785 | orchestrator | changed: [testbed-manager] 2025-11-01 14:05:34.051795 | orchestrator | 2025-11-01 14:05:34.051805 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-11-01 14:05:34.051814 | orchestrator | Saturday 01 November 2025 14:04:27 +0000 (0:00:15.247) 0:03:24.603 ***** 2025-11-01 14:05:34.051824 | orchestrator | ok: [testbed-manager] 2025-11-01 14:05:34.051834 | orchestrator | 2025-11-01 14:05:34.051843 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-11-01 14:05:34.051853 | orchestrator | 2025-11-01 14:05:34.051863 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-11-01 14:05:34.051872 | orchestrator | Saturday 01 November 2025 14:04:28 +0000 (0:00:00.632) 0:03:25.236 ***** 2025-11-01 14:05:34.051882 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:05:34.051892 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:05:34.051901 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:05:34.051911 | orchestrator | 2025-11-01 14:05:34.051921 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-11-01 14:05:34.051930 | orchestrator | Saturday 01 November 2025 14:04:28 +0000 (0:00:00.386) 0:03:25.623 ***** 2025-11-01 14:05:34.051940 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.051950 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:05:34.051959 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:05:34.051969 | orchestrator | 2025-11-01 14:05:34.051979 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-11-01 14:05:34.051988 | orchestrator | Saturday 01 November 2025 14:04:29 +0000 (0:00:00.444) 0:03:26.067 ***** 2025-11-01 14:05:34.051998 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:05:34.052015 | orchestrator | 2025-11-01 14:05:34.052025 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-11-01 14:05:34.052035 | orchestrator | Saturday 01 November 2025 14:04:30 +0000 (0:00:00.802) 0:03:26.870 ***** 2025-11-01 14:05:34.052045 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 14:05:34.052054 | orchestrator | 2025-11-01 14:05:34.052064 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-11-01 14:05:34.052074 | orchestrator | Saturday 01 November 2025 14:04:31 +0000 (0:00:01.106) 0:03:27.977 ***** 2025-11-01 14:05:34.052083 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.052093 | orchestrator | 2025-11-01 14:05:34.052102 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-11-01 14:05:34.052112 | orchestrator | Saturday 01 November 2025 14:04:31 +0000 (0:00:00.157) 0:03:28.134 ***** 2025-11-01 14:05:34.052122 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 14:05:34.052131 | orchestrator | 2025-11-01 14:05:34.052141 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-11-01 14:05:34.052151 | orchestrator | Saturday 01 November 2025 14:04:32 +0000 (0:00:01.133) 0:03:29.267 ***** 2025-11-01 14:05:34.052160 | orchestrator | skipping: [testbed-node-0] 2025-11-01 
14:05:34.052170 | orchestrator | 2025-11-01 14:05:34.052179 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-11-01 14:05:34.052189 | orchestrator | Saturday 01 November 2025 14:04:32 +0000 (0:00:00.193) 0:03:29.461 ***** 2025-11-01 14:05:34.052199 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.052208 | orchestrator | 2025-11-01 14:05:34.052218 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-11-01 14:05:34.052228 | orchestrator | Saturday 01 November 2025 14:04:32 +0000 (0:00:00.216) 0:03:29.677 ***** 2025-11-01 14:05:34.052237 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.052247 | orchestrator | 2025-11-01 14:05:34.052257 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-11-01 14:05:34.052266 | orchestrator | Saturday 01 November 2025 14:04:33 +0000 (0:00:00.143) 0:03:29.821 ***** 2025-11-01 14:05:34.052276 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:05:34.052286 | orchestrator | 2025-11-01 14:05:34.052295 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-11-01 14:05:34.052305 | orchestrator | Saturday 01 November 2025 14:04:33 +0000 (0:00:00.123) 0:03:29.945 ***** 2025-11-01 14:05:34.052314 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-11-01 14:05:34.052324 | orchestrator | 2025-11-01 14:05:34.052334 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-11-01 14:05:34.052348 | orchestrator | Saturday 01 November 2025 14:04:40 +0000 (0:00:07.265) 0:03:37.210 ***** 2025-11-01 14:05:34.052358 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-11-01 14:05:34.052368 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2025-11-01 14:05:34.052377 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-11-01 14:05:34.052387 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-11-01 14:05:34.052397 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-11-01 14:05:34.052406 | orchestrator | 2025-11-01 14:05:34.052416 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-11-01 14:05:34.052426 | orchestrator | Saturday 01 November 2025 14:05:27 +0000 (0:00:47.305) 0:04:24.516 ***** 2025-11-01 14:05:34.052480 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 14:05:34.052491 | orchestrator | 2025-11-01 14:05:34.052500 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-11-01 14:05:34.052510 | orchestrator | Saturday 01 November 2025 14:05:29 +0000 (0:00:01.317) 0:04:25.834 ***** 2025-11-01 14:05:34.052526 | orchestrator | fatal: [testbed-node-0 -> localhost]: FAILED! 
=> {"changed": false, "checksum": "e067333911ec303b1abbababa17374a0629c5a29", "msg": "Destination directory /tmp/k3s does not exist"} 2025-11-01 14:05:34.052544 | orchestrator | 2025-11-01 14:05:34.052554 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:05:34.052563 | orchestrator | testbed-manager : ok=18  changed=10  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:05:34.052574 | orchestrator | testbed-node-0 : ok=43  changed=20  unreachable=0 failed=1  skipped=24  rescued=0 ignored=0 2025-11-01 14:05:34.052585 | orchestrator | testbed-node-1 : ok=35  changed=16  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-11-01 14:05:34.052595 | orchestrator | testbed-node-2 : ok=35  changed=16  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-11-01 14:05:34.052605 | orchestrator | testbed-node-3 : ok=14  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-11-01 14:05:34.052615 | orchestrator | testbed-node-4 : ok=14  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-11-01 14:05:34.052624 | orchestrator | testbed-node-5 : ok=14  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-11-01 14:05:34.052634 | orchestrator | 2025-11-01 14:05:34.052644 | orchestrator | 2025-11-01 14:05:34.052653 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:05:34.052663 | orchestrator | Saturday 01 November 2025 14:05:30 +0000 (0:00:01.733) 0:04:27.567 ***** 2025-11-01 14:05:34.052673 | orchestrator | =============================================================================== 2025-11-01 14:05:34.052683 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 47.31s 2025-11-01 14:05:34.052692 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.79s 2025-11-01 14:05:34.052702 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.27s 2025-11-01 14:05:34.052711 | orchestrator | kubectl : Install required packages ------------------------------------ 15.25s 2025-11-01 14:05:34.052721 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.39s 2025-11-01 14:05:34.052731 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 10.16s 2025-11-01 14:05:34.052740 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 7.27s 2025-11-01 14:05:34.052750 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.80s 2025-11-01 14:05:34.052759 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 4.59s 2025-11-01 14:05:34.052769 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 4.12s 2025-11-01 14:05:34.052778 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.43s 2025-11-01 14:05:34.052788 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 3.28s 2025-11-01 14:05:34.052798 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.20s 2025-11-01 14:05:34.052807 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.15s 2025-11-01 14:05:34.052817 | orchestrator | 
Make kubeconfig available for use inside the manager service ------------ 2.27s 2025-11-01 14:05:34.052826 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.04s 2025-11-01 14:05:34.052836 | orchestrator | k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries --- 1.90s 2025-11-01 14:05:34.052845 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.87s 2025-11-01 14:05:34.052855 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.85s 2025-11-01 14:05:34.052872 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.75s 2025-11-01 14:05:34.052882 | orchestrator | 2025-11-01 14:05:34 | INFO  | Task a693afc9-42f4-4212-a795-5dd0624c9e84 is in state STARTED 2025-11-01 14:05:34.052892 | orchestrator | 2025-11-01 14:05:34 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:34.052902 | orchestrator | 2025-11-01 14:05:34 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:34.052911 | orchestrator | 2025-11-01 14:05:34 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:34.052920 | orchestrator | 2025-11-01 14:05:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:37.101830 | orchestrator | 2025-11-01 14:05:37 | INFO  | Task f43edd62-cd3c-4025-8536-9b38bb41d8f3 is in state STARTED 2025-11-01 14:05:37.101922 | orchestrator | 2025-11-01 14:05:37 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:37.104824 | orchestrator | 2025-11-01 14:05:37 | INFO  | Task a693afc9-42f4-4212-a795-5dd0624c9e84 is in state STARTED 2025-11-01 14:05:37.111500 | orchestrator | 2025-11-01 14:05:37 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:37.112078 | orchestrator | 2025-11-01 14:05:37 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:37.119978 | orchestrator | 2025-11-01 14:05:37 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:37.120004 | orchestrator | 2025-11-01 14:05:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:40.235494 | orchestrator | 2025-11-01 14:05:40 | INFO  | Task f43edd62-cd3c-4025-8536-9b38bb41d8f3 is in state STARTED 2025-11-01 14:05:40.235591 | orchestrator | 2025-11-01 14:05:40 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:40.235620 | orchestrator | 2025-11-01 14:05:40 | INFO  | Task a693afc9-42f4-4212-a795-5dd0624c9e84 is in state STARTED 2025-11-01 14:05:40.235632 | orchestrator | 2025-11-01 14:05:40 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:40.235643 | orchestrator | 2025-11-01 14:05:40 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:40.235654 | orchestrator | 2025-11-01 14:05:40 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:40.235665 | orchestrator | 2025-11-01 14:05:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:43.237700 | orchestrator | 2025-11-01 14:05:43 | INFO  | Task f43edd62-cd3c-4025-8536-9b38bb41d8f3 is in state SUCCESS 2025-11-01 14:05:43.239360 | orchestrator | 2025-11-01 14:05:43 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:43.241223 | 
orchestrator | 2025-11-01 14:05:43 | INFO  | Task a693afc9-42f4-4212-a795-5dd0624c9e84 is in state STARTED 2025-11-01 14:05:43.243473 | orchestrator | 2025-11-01 14:05:43 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:43.246525 | orchestrator | 2025-11-01 14:05:43 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:43.250688 | orchestrator | 2025-11-01 14:05:43 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:43.250712 | orchestrator | 2025-11-01 14:05:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:46.315383 | orchestrator | 2025-11-01 14:05:46 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:46.315967 | orchestrator | 2025-11-01 14:05:46 | INFO  | Task a693afc9-42f4-4212-a795-5dd0624c9e84 is in state SUCCESS 2025-11-01 14:05:46.316420 | orchestrator | 2025-11-01 14:05:46 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:46.318382 | orchestrator | 2025-11-01 14:05:46 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:46.319125 | orchestrator | 2025-11-01 14:05:46 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:46.319148 | orchestrator | 2025-11-01 14:05:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:49.383301 | orchestrator | 2025-11-01 14:05:49 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:49.388384 | orchestrator | 2025-11-01 14:05:49 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:49.391120 | orchestrator | 2025-11-01 14:05:49 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:49.395199 | orchestrator | 2025-11-01 14:05:49 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:49.395214 | orchestrator | 2025-11-01 14:05:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:52.547687 | orchestrator | 2025-11-01 14:05:52 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:52.547949 | orchestrator | 2025-11-01 14:05:52 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:52.549101 | orchestrator | 2025-11-01 14:05:52 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:52.550259 | orchestrator | 2025-11-01 14:05:52 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:52.550412 | orchestrator | 2025-11-01 14:05:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:55.604257 | orchestrator | 2025-11-01 14:05:55 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:55.607622 | orchestrator | 2025-11-01 14:05:55 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:55.610710 | orchestrator | 2025-11-01 14:05:55 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:55.614202 | orchestrator | 2025-11-01 14:05:55 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:55.615759 | orchestrator | 2025-11-01 14:05:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:05:58.657659 | orchestrator | 2025-11-01 14:05:58 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:05:58.659145 | 
orchestrator | 2025-11-01 14:05:58 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:05:58.660985 | orchestrator | 2025-11-01 14:05:58 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:05:58.662707 | orchestrator | 2025-11-01 14:05:58 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:05:58.662731 | orchestrator | 2025-11-01 14:05:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:01.708728 | orchestrator | 2025-11-01 14:06:01 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:01.710323 | orchestrator | 2025-11-01 14:06:01 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:06:01.711554 | orchestrator | 2025-11-01 14:06:01 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:01.712726 | orchestrator | 2025-11-01 14:06:01 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:01.712870 | orchestrator | 2025-11-01 14:06:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:04.769429 | orchestrator | 2025-11-01 14:06:04 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:04.771914 | orchestrator | 2025-11-01 14:06:04 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:06:04.773355 | orchestrator | 2025-11-01 14:06:04 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:04.775330 | orchestrator | 2025-11-01 14:06:04 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:04.775360 | orchestrator | 2025-11-01 14:06:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:07.826299 | orchestrator | 2025-11-01 14:06:07 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:07.829038 | orchestrator | 2025-11-01 14:06:07 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:06:07.832499 | orchestrator | 2025-11-01 14:06:07 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:07.835895 | orchestrator | 2025-11-01 14:06:07 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:07.835921 | orchestrator | 2025-11-01 14:06:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:10.874331 | orchestrator | 2025-11-01 14:06:10 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:10.876087 | orchestrator | 2025-11-01 14:06:10 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:06:10.877902 | orchestrator | 2025-11-01 14:06:10 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:10.879399 | orchestrator | 2025-11-01 14:06:10 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:10.879425 | orchestrator | 2025-11-01 14:06:10 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:13.938112 | orchestrator | 2025-11-01 14:06:13 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:13.942691 | orchestrator | 2025-11-01 14:06:13 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:06:13.943831 | orchestrator | 2025-11-01 14:06:13 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:13.944811 | 
orchestrator | 2025-11-01 14:06:13 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:13.945573 | orchestrator | 2025-11-01 14:06:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:16.983686 | orchestrator | 2025-11-01 14:06:16 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:16.984373 | orchestrator | 2025-11-01 14:06:16 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:06:16.984780 | orchestrator | 2025-11-01 14:06:16 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:16.987350 | orchestrator | 2025-11-01 14:06:16 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:16.987378 | orchestrator | 2025-11-01 14:06:16 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:20.016182 | orchestrator | 2025-11-01 14:06:20 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:20.016687 | orchestrator | 2025-11-01 14:06:20 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:06:20.017750 | orchestrator | 2025-11-01 14:06:20 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:20.019169 | orchestrator | 2025-11-01 14:06:20 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:20.019263 | orchestrator | 2025-11-01 14:06:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:23.052156 | orchestrator | 2025-11-01 14:06:23 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:23.052841 | orchestrator | 2025-11-01 14:06:23 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:06:23.054314 | orchestrator | 2025-11-01 14:06:23 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:23.055563 | orchestrator | 2025-11-01 14:06:23 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:23.055803 | orchestrator | 2025-11-01 14:06:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:26.101053 | orchestrator | 2025-11-01 14:06:26 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:26.101575 | orchestrator | 2025-11-01 14:06:26 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:06:26.102513 | orchestrator | 2025-11-01 14:06:26 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:26.103292 | orchestrator | 2025-11-01 14:06:26 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:26.103312 | orchestrator | 2025-11-01 14:06:26 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:29.144859 | orchestrator | 2025-11-01 14:06:29 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:29.147609 | orchestrator | 2025-11-01 14:06:29 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:06:29.150974 | orchestrator | 2025-11-01 14:06:29 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:29.152835 | orchestrator | 2025-11-01 14:06:29 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:29.153237 | orchestrator | 2025-11-01 14:06:29 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:32.183343 | orchestrator | 2025-11-01 
14:06:32 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:32.184567 | orchestrator | 2025-11-01 14:06:32 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:06:32.185706 | orchestrator | 2025-11-01 14:06:32 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:32.186508 | orchestrator | 2025-11-01 14:06:32 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:32.186707 | orchestrator | 2025-11-01 14:06:32 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:35.229224 | orchestrator | 2025-11-01 14:06:35 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:35.231832 | orchestrator | 2025-11-01 14:06:35 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:06:35.234822 | orchestrator | 2025-11-01 14:06:35 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:35.237374 | orchestrator | 2025-11-01 14:06:35 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:35.237396 | orchestrator | 2025-11-01 14:06:35 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:38.272563 | orchestrator | 2025-11-01 14:06:38 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:38.273214 | orchestrator | 2025-11-01 14:06:38 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:06:38.274571 | orchestrator | 2025-11-01 14:06:38 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:38.275497 | orchestrator | 2025-11-01 14:06:38 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:38.275694 | orchestrator | 2025-11-01 14:06:38 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:41.309173 | orchestrator | 2025-11-01 14:06:41 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:41.313251 | orchestrator | 2025-11-01 14:06:41 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state STARTED 2025-11-01 14:06:41.314876 | orchestrator | 2025-11-01 14:06:41 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:41.316952 | orchestrator | 2025-11-01 14:06:41 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:41.317669 | orchestrator | 2025-11-01 14:06:41 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:44.366231 | orchestrator | 2025-11-01 14:06:44 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:44.367627 | orchestrator | 2025-11-01 14:06:44 | INFO  | Task 83a2fcf2-4a8c-4828-a76d-99441ab94546 is in state SUCCESS 2025-11-01 14:06:44.369849 | orchestrator | 2025-11-01 14:06:44.369885 | orchestrator | 2025-11-01 14:06:44.369899 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-11-01 14:06:44.369910 | orchestrator | 2025-11-01 14:06:44.369922 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-11-01 14:06:44.369933 | orchestrator | Saturday 01 November 2025 14:05:37 +0000 (0:00:00.430) 0:00:00.430 ***** 2025-11-01 14:06:44.369945 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-11-01 14:06:44.369956 | orchestrator | 2025-11-01 14:06:44.369967 | orchestrator | TASK [Write kubeconfig 
file] *************************************************** 2025-11-01 14:06:44.369978 | orchestrator | Saturday 01 November 2025 14:05:38 +0000 (0:00:00.884) 0:00:01.315 ***** 2025-11-01 14:06:44.369989 | orchestrator | changed: [testbed-manager] 2025-11-01 14:06:44.370000 | orchestrator | 2025-11-01 14:06:44.370011 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-11-01 14:06:44.370067 | orchestrator | Saturday 01 November 2025 14:05:39 +0000 (0:00:01.446) 0:00:02.762 ***** 2025-11-01 14:06:44.370078 | orchestrator | changed: [testbed-manager] 2025-11-01 14:06:44.370089 | orchestrator | 2025-11-01 14:06:44.370100 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:06:44.370111 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:06:44.370124 | orchestrator | 2025-11-01 14:06:44.370135 | orchestrator | 2025-11-01 14:06:44.370146 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:06:44.370157 | orchestrator | Saturday 01 November 2025 14:05:40 +0000 (0:00:00.533) 0:00:03.296 ***** 2025-11-01 14:06:44.370168 | orchestrator | =============================================================================== 2025-11-01 14:06:44.370179 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.45s 2025-11-01 14:06:44.370190 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.88s 2025-11-01 14:06:44.370201 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.53s 2025-11-01 14:06:44.370212 | orchestrator | 2025-11-01 14:06:44.370224 | orchestrator | 2025-11-01 14:06:44.370235 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-11-01 14:06:44.370246 | orchestrator | 2025-11-01 14:06:44.370257 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-11-01 14:06:44.370292 | orchestrator | Saturday 01 November 2025 14:05:36 +0000 (0:00:00.184) 0:00:00.185 ***** 2025-11-01 14:06:44.370303 | orchestrator | ok: [testbed-manager] 2025-11-01 14:06:44.370315 | orchestrator | 2025-11-01 14:06:44.370326 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-11-01 14:06:44.370336 | orchestrator | Saturday 01 November 2025 14:05:36 +0000 (0:00:00.692) 0:00:00.877 ***** 2025-11-01 14:06:44.370347 | orchestrator | ok: [testbed-manager] 2025-11-01 14:06:44.370358 | orchestrator | 2025-11-01 14:06:44.370369 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-11-01 14:06:44.370379 | orchestrator | Saturday 01 November 2025 14:05:37 +0000 (0:00:00.687) 0:00:01.564 ***** 2025-11-01 14:06:44.370390 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-11-01 14:06:44.370401 | orchestrator | 2025-11-01 14:06:44.370412 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-11-01 14:06:44.370422 | orchestrator | Saturday 01 November 2025 14:05:38 +0000 (0:00:00.793) 0:00:02.357 ***** 2025-11-01 14:06:44.370433 | orchestrator | changed: [testbed-manager] 2025-11-01 14:06:44.370443 | orchestrator | 2025-11-01 14:06:44.370481 | orchestrator | TASK [Change server address in the kubeconfig] 
********************************* 2025-11-01 14:06:44.370494 | orchestrator | Saturday 01 November 2025 14:05:40 +0000 (0:00:01.921) 0:00:04.279 ***** 2025-11-01 14:06:44.370506 | orchestrator | changed: [testbed-manager] 2025-11-01 14:06:44.370518 | orchestrator | 2025-11-01 14:06:44.370530 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-11-01 14:06:44.370543 | orchestrator | Saturday 01 November 2025 14:05:41 +0000 (0:00:00.746) 0:00:05.025 ***** 2025-11-01 14:06:44.370555 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-01 14:06:44.370567 | orchestrator | 2025-11-01 14:06:44.370580 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-11-01 14:06:44.370592 | orchestrator | Saturday 01 November 2025 14:05:42 +0000 (0:00:01.688) 0:00:06.714 ***** 2025-11-01 14:06:44.370604 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-01 14:06:44.370617 | orchestrator | 2025-11-01 14:06:44.370629 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-11-01 14:06:44.370642 | orchestrator | Saturday 01 November 2025 14:05:43 +0000 (0:00:01.178) 0:00:07.892 ***** 2025-11-01 14:06:44.370654 | orchestrator | ok: [testbed-manager] 2025-11-01 14:06:44.370666 | orchestrator | 2025-11-01 14:06:44.370678 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-11-01 14:06:44.370690 | orchestrator | Saturday 01 November 2025 14:05:44 +0000 (0:00:00.463) 0:00:08.356 ***** 2025-11-01 14:06:44.370702 | orchestrator | ok: [testbed-manager] 2025-11-01 14:06:44.370715 | orchestrator | 2025-11-01 14:06:44.370727 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:06:44.370740 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:06:44.370752 | orchestrator | 2025-11-01 14:06:44.370765 | orchestrator | 2025-11-01 14:06:44.370777 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:06:44.370803 | orchestrator | Saturday 01 November 2025 14:05:44 +0000 (0:00:00.338) 0:00:08.694 ***** 2025-11-01 14:06:44.370816 | orchestrator | =============================================================================== 2025-11-01 14:06:44.370828 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.92s 2025-11-01 14:06:44.370839 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.69s 2025-11-01 14:06:44.370850 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.18s 2025-11-01 14:06:44.370874 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.79s 2025-11-01 14:06:44.370885 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.75s 2025-11-01 14:06:44.370896 | orchestrator | Get home directory of operator user ------------------------------------- 0.69s 2025-11-01 14:06:44.370918 | orchestrator | Create .kube directory -------------------------------------------------- 0.69s 2025-11-01 14:06:44.370929 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.46s 2025-11-01 14:06:44.370940 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.34s 2025-11-01 
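The Ansible source behind the two kubeconfig plays is not part of this log, but the "Change server address in the kubeconfig file" step recapped above can be sketched as a single replace task. The file path, the regexp and the api_address variable below are illustrative assumptions, not values taken from the job:

  - name: Change server address in the kubeconfig file
    ansible.builtin.replace:
      path: /share/kubeconfig                              # assumed location of the written kubeconfig
      regexp: 'server: https://.*:6443'
      replace: "server: https://{{ api_address }}:6443"    # api_address is a hypothetical variable

In the testbed the rewrite points the kubeconfig at an address reachable from the manager instead of the node-local endpoint the cluster reports.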
14:06:44.370950 | orchestrator | 2025-11-01 14:06:44.370961 | orchestrator | 2025-11-01 14:06:44.370972 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-11-01 14:06:44.370982 | orchestrator | 2025-11-01 14:06:44.371087 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-11-01 14:06:44.371107 | orchestrator | Saturday 01 November 2025 14:04:08 +0000 (0:00:00.139) 0:00:00.139 ***** 2025-11-01 14:06:44.371118 | orchestrator | ok: [localhost] => { 2025-11-01 14:06:44.371130 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-11-01 14:06:44.371141 | orchestrator | } 2025-11-01 14:06:44.371152 | orchestrator | 2025-11-01 14:06:44.371163 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-11-01 14:06:44.371174 | orchestrator | Saturday 01 November 2025 14:04:08 +0000 (0:00:00.063) 0:00:00.203 ***** 2025-11-01 14:06:44.371186 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-11-01 14:06:44.371198 | orchestrator | ...ignoring 2025-11-01 14:06:44.371209 | orchestrator | 2025-11-01 14:06:44.371220 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-11-01 14:06:44.371230 | orchestrator | Saturday 01 November 2025 14:04:13 +0000 (0:00:04.919) 0:00:05.122 ***** 2025-11-01 14:06:44.371241 | orchestrator | skipping: [localhost] 2025-11-01 14:06:44.371252 | orchestrator | 2025-11-01 14:06:44.371263 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-11-01 14:06:44.371273 | orchestrator | Saturday 01 November 2025 14:04:13 +0000 (0:00:00.111) 0:00:05.234 ***** 2025-11-01 14:06:44.371284 | orchestrator | ok: [localhost] 2025-11-01 14:06:44.371366 | orchestrator | 2025-11-01 14:06:44.371379 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:06:44.371390 | orchestrator | 2025-11-01 14:06:44.371401 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:06:44.371412 | orchestrator | Saturday 01 November 2025 14:04:13 +0000 (0:00:00.540) 0:00:05.775 ***** 2025-11-01 14:06:44.371423 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:06:44.371434 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:06:44.371465 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:06:44.371477 | orchestrator | 2025-11-01 14:06:44.371488 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:06:44.371499 | orchestrator | Saturday 01 November 2025 14:04:14 +0000 (0:00:00.562) 0:00:06.338 ***** 2025-11-01 14:06:44.371509 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-11-01 14:06:44.371520 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-11-01 14:06:44.371531 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-11-01 14:06:44.371542 | orchestrator | 2025-11-01 14:06:44.371552 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-11-01 14:06:44.371563 | orchestrator | 2025-11-01 14:06:44.371574 | orchestrator | TASK [rabbitmq : include_tasks] 
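The ignored "Check RabbitMQ service" failure above ("Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672") has the shape of a wait_for probe whose result only steers the later set_fact. A minimal sketch of that pattern, assuming wait_for and set_fact are what the play uses (the real task parameters are not shown in the log):

  - name: Check RabbitMQ service
    ansible.builtin.wait_for:
      host: 192.168.16.9
      port: 15672
      search_regex: RabbitMQ Management
      timeout: 2                 # assumed; the log only reports elapsed: 2
    register: rabbitmq_check
    ignore_errors: true          # "...ignoring" above: a failure simply means RabbitMQ is not deployed yet

  - name: Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running
    ansible.builtin.set_fact:
      kolla_action_rabbitmq: upgrade
    when: rabbitmq_check is not failed   # skipped in this run because the check timed out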
************************************************ 2025-11-01 14:06:44.371585 | orchestrator | Saturday 01 November 2025 14:04:15 +0000 (0:00:00.964) 0:00:07.302 ***** 2025-11-01 14:06:44.371596 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:06:44.371606 | orchestrator | 2025-11-01 14:06:44.371617 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-11-01 14:06:44.371628 | orchestrator | Saturday 01 November 2025 14:04:16 +0000 (0:00:01.058) 0:00:08.361 ***** 2025-11-01 14:06:44.371649 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:06:44.371659 | orchestrator | 2025-11-01 14:06:44.371670 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-11-01 14:06:44.371681 | orchestrator | Saturday 01 November 2025 14:04:17 +0000 (0:00:01.195) 0:00:09.556 ***** 2025-11-01 14:06:44.371692 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:06:44.371702 | orchestrator | 2025-11-01 14:06:44.371713 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-11-01 14:06:44.371724 | orchestrator | Saturday 01 November 2025 14:04:18 +0000 (0:00:01.199) 0:00:10.756 ***** 2025-11-01 14:06:44.371734 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:06:44.371745 | orchestrator | 2025-11-01 14:06:44.371756 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-11-01 14:06:44.371766 | orchestrator | Saturday 01 November 2025 14:04:19 +0000 (0:00:00.645) 0:00:11.401 ***** 2025-11-01 14:06:44.371777 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:06:44.371788 | orchestrator | 2025-11-01 14:06:44.371799 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-11-01 14:06:44.371809 | orchestrator | Saturday 01 November 2025 14:04:20 +0000 (0:00:00.604) 0:00:12.005 ***** 2025-11-01 14:06:44.371820 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:06:44.371831 | orchestrator | 2025-11-01 14:06:44.371849 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-11-01 14:06:44.371860 | orchestrator | Saturday 01 November 2025 14:04:23 +0000 (0:00:03.104) 0:00:15.110 ***** 2025-11-01 14:06:44.371871 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:06:44.371881 | orchestrator | 2025-11-01 14:06:44.371892 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-11-01 14:06:44.371911 | orchestrator | Saturday 01 November 2025 14:04:24 +0000 (0:00:01.589) 0:00:16.701 ***** 2025-11-01 14:06:44.371923 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:06:44.371933 | orchestrator | 2025-11-01 14:06:44.371944 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-11-01 14:06:44.371955 | orchestrator | Saturday 01 November 2025 14:04:25 +0000 (0:00:01.015) 0:00:17.716 ***** 2025-11-01 14:06:44.371966 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:06:44.371976 | orchestrator | 2025-11-01 14:06:44.371987 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-11-01 14:06:44.371999 | orchestrator | Saturday 01 November 2025 14:04:26 +0000 (0:00:01.018) 0:00:18.734 ***** 2025-11-01 
14:06:44.372012 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:06:44.372024 | orchestrator | 2025-11-01 14:06:44.372037 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-11-01 14:06:44.372049 | orchestrator | Saturday 01 November 2025 14:04:27 +0000 (0:00:00.568) 0:00:19.303 ***** 2025-11-01 14:06:44.372067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 14:06:44.372086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 14:06:44.372115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 
'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 14:06:44.372129 | orchestrator | 2025-11-01 14:06:44.372141 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-11-01 14:06:44.372153 | orchestrator | Saturday 01 November 2025 14:04:29 +0000 (0:00:01.792) 0:00:21.096 ***** 2025-11-01 14:06:44.372174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 14:06:44.372190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 14:06:44.372212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 14:06:44.372225 | orchestrator | 2025-11-01 14:06:44.372238 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-11-01 14:06:44.372251 | orchestrator | Saturday 01 November 2025 14:04:32 +0000 (0:00:03.228) 0:00:24.325 ***** 2025-11-01 14:06:44.372263 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-11-01 14:06:44.372275 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-11-01 14:06:44.372287 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-11-01 14:06:44.372299 | orchestrator | 2025-11-01 14:06:44.372311 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-11-01 14:06:44.372323 | orchestrator | Saturday 01 November 2025 14:04:35 +0000 (0:00:03.061) 0:00:27.386 ***** 2025-11-01 14:06:44.372341 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-11-01 14:06:44.372354 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-11-01 14:06:44.372364 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-11-01 14:06:44.372375 | orchestrator | 2025-11-01 14:06:44.372386 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-11-01 14:06:44.372402 | orchestrator | Saturday 01 November 2025 14:04:39 +0000 (0:00:03.537) 0:00:30.923 ***** 2025-11-01 14:06:44.372413 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-11-01 14:06:44.372424 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-11-01 14:06:44.372434 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-11-01 14:06:44.372462 | orchestrator | 2025-11-01 14:06:44.372473 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-11-01 14:06:44.372484 | orchestrator | Saturday 01 November 2025 14:04:41 +0000 (0:00:02.056) 0:00:32.980 ***** 2025-11-01 14:06:44.372495 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-11-01 14:06:44.372506 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-11-01 14:06:44.372516 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-11-01 14:06:44.372527 | orchestrator | 2025-11-01 14:06:44.372538 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-11-01 14:06:44.372556 | orchestrator | Saturday 01 November 2025 14:04:43 +0000 (0:00:02.537) 0:00:35.518 ***** 2025-11-01 14:06:44.372566 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-11-01 14:06:44.372577 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-11-01 14:06:44.372588 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-11-01 14:06:44.372599 | orchestrator | 2025-11-01 14:06:44.372610 | orchestrator | TASK 
[rabbitmq : Copying over enabled_plugins] ********************************* 2025-11-01 14:06:44.372620 | orchestrator | Saturday 01 November 2025 14:04:45 +0000 (0:00:02.159) 0:00:37.678 ***** 2025-11-01 14:06:44.372631 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-11-01 14:06:44.372642 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-11-01 14:06:44.372652 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-11-01 14:06:44.372663 | orchestrator | 2025-11-01 14:06:44.372674 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-11-01 14:06:44.372685 | orchestrator | Saturday 01 November 2025 14:04:47 +0000 (0:00:01.646) 0:00:39.324 ***** 2025-11-01 14:06:44.372695 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:06:44.372706 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:06:44.372717 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:06:44.372728 | orchestrator | 2025-11-01 14:06:44.372739 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-11-01 14:06:44.372749 | orchestrator | Saturday 01 November 2025 14:04:48 +0000 (0:00:00.609) 0:00:39.933 ***** 2025-11-01 14:06:44.372761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 14:06:44.372785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 
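The rabbitmq container definition that is dumped verbatim in each item result above is easier to read as YAML. The values below are copied from those item results; the bootstrap_environment block and the RABBITMQ_CLUSTER_COOKIE value are omitted here:

  rabbitmq:
    container_name: rabbitmq
    group: rabbitmq
    enabled: true
    image: registry.osism.tech/kolla/rabbitmq:2024.2
    environment:
      KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
      RABBITMQ_LOG_DIR: /var/log/kolla/rabbitmq
    volumes:
      - /etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - rabbitmq:/var/lib/rabbitmq/
      - kolla_logs:/var/log/kolla/
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_rabbitmq"]
      timeout: "30"
    haproxy:
      rabbitmq_management:
        enabled: "yes"
        mode: http
        port: "15672"
        host_group: rabbitmq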
2025-11-01 14:06:44.372799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 14:06:44.372817 | orchestrator | 2025-11-01 14:06:44.372828 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-11-01 14:06:44.372839 | orchestrator | Saturday 01 November 2025 14:04:49 +0000 (0:00:01.833) 0:00:41.767 ***** 2025-11-01 14:06:44.372849 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:06:44.372860 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:06:44.372871 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:06:44.372881 | orchestrator | 2025-11-01 14:06:44.372892 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-11-01 14:06:44.372903 | orchestrator | Saturday 01 November 2025 14:04:51 +0000 (0:00:01.669) 0:00:43.436 ***** 2025-11-01 14:06:44.372913 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:06:44.372924 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:06:44.372935 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:06:44.372946 | orchestrator | 2025-11-01 14:06:44.372956 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-11-01 14:06:44.372967 | orchestrator | Saturday 01 November 2025 14:04:59 +0000 (0:00:07.913) 0:00:51.350 ***** 2025-11-01 14:06:44.372978 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:06:44.372988 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:06:44.372999 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:06:44.373010 | orchestrator | 2025-11-01 14:06:44.373020 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-11-01 14:06:44.373031 | orchestrator | 2025-11-01 14:06:44.373042 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-11-01 14:06:44.373053 | orchestrator | Saturday 01 November 2025 14:05:00 +0000 (0:00:00.493) 0:00:51.843 ***** 2025-11-01 14:06:44.373063 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:06:44.373074 | orchestrator | 2025-11-01 14:06:44.373085 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-11-01 14:06:44.373096 | orchestrator | Saturday 01 November 2025 14:05:00 +0000 (0:00:00.834) 0:00:52.678 ***** 2025-11-01 14:06:44.373106 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:06:44.373117 | orchestrator | 2025-11-01 14:06:44.373128 | orchestrator | TASK [rabbitmq 
: Restart rabbitmq container] *********************************** 2025-11-01 14:06:44.373138 | orchestrator | Saturday 01 November 2025 14:05:01 +0000 (0:00:00.325) 0:00:53.003 ***** 2025-11-01 14:06:44.373149 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:06:44.373160 | orchestrator | 2025-11-01 14:06:44.373171 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-11-01 14:06:44.373181 | orchestrator | Saturday 01 November 2025 14:05:03 +0000 (0:00:01.878) 0:00:54.881 ***** 2025-11-01 14:06:44.373192 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:06:44.373203 | orchestrator | 2025-11-01 14:06:44.373213 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-11-01 14:06:44.373224 | orchestrator | 2025-11-01 14:06:44.373234 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-11-01 14:06:44.373245 | orchestrator | Saturday 01 November 2025 14:06:00 +0000 (0:00:57.399) 0:01:52.280 ***** 2025-11-01 14:06:44.373262 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:06:44.373273 | orchestrator | 2025-11-01 14:06:44.373283 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-11-01 14:06:44.373294 | orchestrator | Saturday 01 November 2025 14:06:01 +0000 (0:00:00.638) 0:01:52.919 ***** 2025-11-01 14:06:44.373305 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:06:44.373315 | orchestrator | 2025-11-01 14:06:44.373326 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-11-01 14:06:44.373337 | orchestrator | Saturday 01 November 2025 14:06:01 +0000 (0:00:00.260) 0:01:53.179 ***** 2025-11-01 14:06:44.373348 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:06:44.373359 | orchestrator | 2025-11-01 14:06:44.373369 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-11-01 14:06:44.373380 | orchestrator | Saturday 01 November 2025 14:06:03 +0000 (0:00:01.817) 0:01:54.996 ***** 2025-11-01 14:06:44.373395 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:06:44.373406 | orchestrator | 2025-11-01 14:06:44.373417 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-11-01 14:06:44.373428 | orchestrator | 2025-11-01 14:06:44.373438 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-11-01 14:06:44.373467 | orchestrator | Saturday 01 November 2025 14:06:20 +0000 (0:00:16.871) 0:02:11.868 ***** 2025-11-01 14:06:44.373479 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:06:44.373489 | orchestrator | 2025-11-01 14:06:44.373506 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-11-01 14:06:44.373517 | orchestrator | Saturday 01 November 2025 14:06:20 +0000 (0:00:00.711) 0:02:12.580 ***** 2025-11-01 14:06:44.373528 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:06:44.373538 | orchestrator | 2025-11-01 14:06:44.373549 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-11-01 14:06:44.373560 | orchestrator | Saturday 01 November 2025 14:06:21 +0000 (0:00:00.366) 0:02:12.946 ***** 2025-11-01 14:06:44.373570 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:06:44.373581 | orchestrator | 2025-11-01 14:06:44.373592 | orchestrator | TASK [rabbitmq : Waiting for 
rabbitmq to start] ******************************** 2025-11-01 14:06:44.373603 | orchestrator | Saturday 01 November 2025 14:06:23 +0000 (0:00:02.260) 0:02:15.207 ***** 2025-11-01 14:06:44.373613 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:06:44.373624 | orchestrator | 2025-11-01 14:06:44.373635 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-11-01 14:06:44.373645 | orchestrator | 2025-11-01 14:06:44.373656 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-11-01 14:06:44.373667 | orchestrator | Saturday 01 November 2025 14:06:39 +0000 (0:00:15.653) 0:02:30.860 ***** 2025-11-01 14:06:44.373678 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:06:44.373688 | orchestrator | 2025-11-01 14:06:44.373699 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-11-01 14:06:44.373709 | orchestrator | Saturday 01 November 2025 14:06:39 +0000 (0:00:00.523) 0:02:31.383 ***** 2025-11-01 14:06:44.373720 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-11-01 14:06:44.373731 | orchestrator | enable_outward_rabbitmq_True 2025-11-01 14:06:44.373741 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-11-01 14:06:44.373752 | orchestrator | outward_rabbitmq_restart 2025-11-01 14:06:44.373763 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:06:44.373773 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:06:44.373784 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:06:44.373795 | orchestrator | 2025-11-01 14:06:44.373806 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-11-01 14:06:44.373816 | orchestrator | skipping: no hosts matched 2025-11-01 14:06:44.373827 | orchestrator | 2025-11-01 14:06:44.373838 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-11-01 14:06:44.373862 | orchestrator | skipping: no hosts matched 2025-11-01 14:06:44.373872 | orchestrator | 2025-11-01 14:06:44.373883 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-11-01 14:06:44.373894 | orchestrator | skipping: no hosts matched 2025-11-01 14:06:44.373905 | orchestrator | 2025-11-01 14:06:44.373915 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:06:44.373926 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-11-01 14:06:44.373937 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-01 14:06:44.373948 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 14:06:44.373959 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 14:06:44.373970 | orchestrator | 2025-11-01 14:06:44.373981 | orchestrator | 2025-11-01 14:06:44.373991 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:06:44.374002 | orchestrator | Saturday 01 November 2025 14:06:42 +0000 (0:00:03.216) 0:02:34.599 ***** 2025-11-01 14:06:44.374013 | orchestrator | =============================================================================== 2025-11-01 14:06:44.374053 | orchestrator | 
rabbitmq : Waiting for rabbitmq to start ------------------------------- 89.92s 2025-11-01 14:06:44.374064 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.91s 2025-11-01 14:06:44.374075 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.96s 2025-11-01 14:06:44.374085 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.92s 2025-11-01 14:06:44.374096 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.54s 2025-11-01 14:06:44.374107 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.23s 2025-11-01 14:06:44.374118 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.22s 2025-11-01 14:06:44.374129 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 3.10s 2025-11-01 14:06:44.374139 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.06s 2025-11-01 14:06:44.374150 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.54s 2025-11-01 14:06:44.374160 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.19s 2025-11-01 14:06:44.374171 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.16s 2025-11-01 14:06:44.374182 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.06s 2025-11-01 14:06:44.374198 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.83s 2025-11-01 14:06:44.374209 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.79s 2025-11-01 14:06:44.374220 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.67s 2025-11-01 14:06:44.374231 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.65s 2025-11-01 14:06:44.374248 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.59s 2025-11-01 14:06:44.374259 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 1.20s 2025-11-01 14:06:44.374269 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.20s 2025-11-01 14:06:44.374280 | orchestrator | 2025-11-01 14:06:44 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:44.374291 | orchestrator | 2025-11-01 14:06:44 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:44.374302 | orchestrator | 2025-11-01 14:06:44 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:47.416825 | orchestrator | 2025-11-01 14:06:47 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:47.418128 | orchestrator | 2025-11-01 14:06:47 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:47.418492 | orchestrator | 2025-11-01 14:06:47 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:47.418516 | orchestrator | 2025-11-01 14:06:47 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:50.469128 | orchestrator | 2025-11-01 14:06:50 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:50.472691 | orchestrator | 2025-11-01 14:06:50 | INFO  | Task 
827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:50.474623 | orchestrator | 2025-11-01 14:06:50 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:50.474656 | orchestrator | 2025-11-01 14:06:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:53.508545 | orchestrator | 2025-11-01 14:06:53 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:53.508790 | orchestrator | 2025-11-01 14:06:53 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:53.509720 | orchestrator | 2025-11-01 14:06:53 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:53.509751 | orchestrator | 2025-11-01 14:06:53 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:56.569047 | orchestrator | 2025-11-01 14:06:56 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:56.570993 | orchestrator | 2025-11-01 14:06:56 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:56.572046 | orchestrator | 2025-11-01 14:06:56 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:56.572080 | orchestrator | 2025-11-01 14:06:56 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:06:59.617189 | orchestrator | 2025-11-01 14:06:59 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:06:59.620158 | orchestrator | 2025-11-01 14:06:59 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:06:59.622240 | orchestrator | 2025-11-01 14:06:59 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:06:59.622267 | orchestrator | 2025-11-01 14:06:59 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:02.659232 | orchestrator | 2025-11-01 14:07:02 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:07:02.661662 | orchestrator | 2025-11-01 14:07:02 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:02.662275 | orchestrator | 2025-11-01 14:07:02 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:02.662302 | orchestrator | 2025-11-01 14:07:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:05.697271 | orchestrator | 2025-11-01 14:07:05 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:07:05.705304 | orchestrator | 2025-11-01 14:07:05 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:05.708236 | orchestrator | 2025-11-01 14:07:05 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:05.708261 | orchestrator | 2025-11-01 14:07:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:08.745022 | orchestrator | 2025-11-01 14:07:08 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:07:08.745120 | orchestrator | 2025-11-01 14:07:08 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:08.746784 | orchestrator | 2025-11-01 14:07:08 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:08.746813 | orchestrator | 2025-11-01 14:07:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:11.791776 | orchestrator | 2025-11-01 14:07:11 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state 
STARTED 2025-11-01 14:07:11.794484 | orchestrator | 2025-11-01 14:07:11 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:11.797709 | orchestrator | 2025-11-01 14:07:11 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:11.797931 | orchestrator | 2025-11-01 14:07:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:14.845725 | orchestrator | 2025-11-01 14:07:14 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:07:14.847908 | orchestrator | 2025-11-01 14:07:14 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:14.849251 | orchestrator | 2025-11-01 14:07:14 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:14.849275 | orchestrator | 2025-11-01 14:07:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:17.883927 | orchestrator | 2025-11-01 14:07:17 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:07:17.885774 | orchestrator | 2025-11-01 14:07:17 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:17.886713 | orchestrator | 2025-11-01 14:07:17 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:17.886742 | orchestrator | 2025-11-01 14:07:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:20.928417 | orchestrator | 2025-11-01 14:07:20 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:07:20.929338 | orchestrator | 2025-11-01 14:07:20 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:20.930352 | orchestrator | 2025-11-01 14:07:20 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:20.930380 | orchestrator | 2025-11-01 14:07:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:23.987160 | orchestrator | 2025-11-01 14:07:23 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:07:23.988431 | orchestrator | 2025-11-01 14:07:23 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:23.990102 | orchestrator | 2025-11-01 14:07:23 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:23.990364 | orchestrator | 2025-11-01 14:07:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:27.035806 | orchestrator | 2025-11-01 14:07:27 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:07:27.037879 | orchestrator | 2025-11-01 14:07:27 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:27.040079 | orchestrator | 2025-11-01 14:07:27 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:27.040102 | orchestrator | 2025-11-01 14:07:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:30.088245 | orchestrator | 2025-11-01 14:07:30 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:07:30.088905 | orchestrator | 2025-11-01 14:07:30 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:30.089958 | orchestrator | 2025-11-01 14:07:30 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:30.090239 | orchestrator | 2025-11-01 14:07:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:33.148679 | orchestrator 
| 2025-11-01 14:07:33 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:07:33.149034 | orchestrator | 2025-11-01 14:07:33 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:33.151292 | orchestrator | 2025-11-01 14:07:33 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:33.151315 | orchestrator | 2025-11-01 14:07:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:36.190637 | orchestrator | 2025-11-01 14:07:36 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:07:36.190847 | orchestrator | 2025-11-01 14:07:36 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:36.191848 | orchestrator | 2025-11-01 14:07:36 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:36.192066 | orchestrator | 2025-11-01 14:07:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:39.233776 | orchestrator | 2025-11-01 14:07:39 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:07:39.234434 | orchestrator | 2025-11-01 14:07:39 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:39.236449 | orchestrator | 2025-11-01 14:07:39 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:39.236613 | orchestrator | 2025-11-01 14:07:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:42.281114 | orchestrator | 2025-11-01 14:07:42 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state STARTED 2025-11-01 14:07:42.282423 | orchestrator | 2025-11-01 14:07:42 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:42.283439 | orchestrator | 2025-11-01 14:07:42 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:42.283621 | orchestrator | 2025-11-01 14:07:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:45.331710 | orchestrator | 2025-11-01 14:07:45 | INFO  | Task d00e0b2f-2e34-46a2-ac4e-16602d98a0d9 is in state SUCCESS 2025-11-01 14:07:45.334983 | orchestrator | 2025-11-01 14:07:45.335011 | orchestrator | 2025-11-01 14:07:45.335018 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:07:45.335024 | orchestrator | 2025-11-01 14:07:45.335029 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:07:45.335034 | orchestrator | Saturday 01 November 2025 14:05:04 +0000 (0:00:00.388) 0:00:00.388 ***** 2025-11-01 14:07:45.335039 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:07:45.335046 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:07:45.335051 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:07:45.335055 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.335060 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.335065 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.335069 | orchestrator | 2025-11-01 14:07:45.335074 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:07:45.335079 | orchestrator | Saturday 01 November 2025 14:05:05 +0000 (0:00:00.842) 0:00:01.230 ***** 2025-11-01 14:07:45.335084 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-11-01 14:07:45.335089 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-11-01 
14:07:45.335108 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-11-01 14:07:45.335114 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-11-01 14:07:45.335119 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-11-01 14:07:45.335123 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-11-01 14:07:45.335128 | orchestrator | 2025-11-01 14:07:45.335133 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-11-01 14:07:45.335138 | orchestrator | 2025-11-01 14:07:45.335142 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-11-01 14:07:45.335147 | orchestrator | Saturday 01 November 2025 14:05:06 +0000 (0:00:01.272) 0:00:02.502 ***** 2025-11-01 14:07:45.335153 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:07:45.335159 | orchestrator | 2025-11-01 14:07:45.335164 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-11-01 14:07:45.335169 | orchestrator | Saturday 01 November 2025 14:05:07 +0000 (0:00:01.346) 0:00:03.849 ***** 2025-11-01 14:07:45.335176 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335183 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335266 | orchestrator | 2025-11-01 14:07:45.335279 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-11-01 14:07:45.335289 | orchestrator | Saturday 01 November 2025 14:05:09 +0000 (0:00:01.480) 0:00:05.330 ***** 2025-11-01 14:07:45.335294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-11-01 14:07:45.335327 | orchestrator | 2025-11-01 14:07:45.335331 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-11-01 14:07:45.335336 | orchestrator | Saturday 01 November 2025 14:05:11 +0000 (0:00:02.411) 0:00:07.742 ***** 2025-11-01 14:07:45.335341 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335346 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335354 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335379 | orchestrator | 2025-11-01 14:07:45.335384 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-11-01 14:07:45.335389 | orchestrator | Saturday 01 November 2025 14:05:13 +0000 (0:00:01.572) 0:00:09.314 ***** 2025-11-01 14:07:45.335394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335406 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335429 | orchestrator | 2025-11-01 14:07:45.335436 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-11-01 14:07:45.335441 | orchestrator | Saturday 01 November 2025 14:05:15 +0000 (0:00:01.998) 0:00:11.313 ***** 2025-11-01 14:07:45.335446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335450 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335455 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.335494 | orchestrator | 2025-11-01 14:07:45.335498 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-11-01 14:07:45.335503 | orchestrator | Saturday 01 November 2025 14:05:17 +0000 (0:00:02.195) 0:00:13.508 ***** 2025-11-01 14:07:45.335508 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:07:45.335513 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:07:45.335518 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:07:45.335522 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:07:45.335531 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:07:45.335535 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:07:45.335540 | orchestrator | 2025-11-01 14:07:45.335545 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-11-01 14:07:45.335550 | orchestrator | Saturday 01 November 2025 14:05:20 +0000 (0:00:02.713) 0:00:16.222 ***** 2025-11-01 14:07:45.335554 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-11-01 14:07:45.335560 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-11-01 14:07:45.335564 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-11-01 14:07:45.335569 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-11-01 14:07:45.335574 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-11-01 14:07:45.335804 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-11-01 14:07:45.335814 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-01 14:07:45.335820 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-01 14:07:45.335829 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-01 14:07:45.335835 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-01 14:07:45.335840 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-01 14:07:45.335845 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-01 14:07:45.335851 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-01 14:07:45.335857 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-01 14:07:45.335863 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-01 14:07:45.335868 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-01 14:07:45.335874 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-01 14:07:45.335879 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-01 14:07:45.335884 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-01 14:07:45.335891 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-01 14:07:45.335896 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-01 14:07:45.335902 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-01 14:07:45.335907 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-01 14:07:45.335912 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-01 14:07:45.335918 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-01 14:07:45.335923 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-01 14:07:45.335929 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-01 14:07:45.335939 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-01 
14:07:45.335945 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-01 14:07:45.335950 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-01 14:07:45.335955 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-01 14:07:45.335963 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-01 14:07:45.335968 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-01 14:07:45.335972 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-01 14:07:45.335977 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-01 14:07:45.335982 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-01 14:07:45.335986 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-11-01 14:07:45.335991 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-11-01 14:07:45.335996 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-11-01 14:07:45.336001 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-11-01 14:07:45.336005 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-11-01 14:07:45.336010 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-11-01 14:07:45.336015 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-11-01 14:07:45.336020 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-11-01 14:07:45.336028 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-11-01 14:07:45.336033 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-11-01 14:07:45.336037 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-11-01 14:07:45.336042 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-11-01 14:07:45.336047 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-11-01 14:07:45.336052 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-11-01 14:07:45.336056 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-11-01 14:07:45.336061 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 
'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-11-01 14:07:45.336066 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-11-01 14:07:45.336070 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-11-01 14:07:45.336075 | orchestrator | 2025-11-01 14:07:45.336080 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-01 14:07:45.336090 | orchestrator | Saturday 01 November 2025 14:05:41 +0000 (0:00:21.026) 0:00:37.248 ***** 2025-11-01 14:07:45.336094 | orchestrator | 2025-11-01 14:07:45.336099 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-01 14:07:45.336104 | orchestrator | Saturday 01 November 2025 14:05:41 +0000 (0:00:00.091) 0:00:37.340 ***** 2025-11-01 14:07:45.336108 | orchestrator | 2025-11-01 14:07:45.336113 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-01 14:07:45.336118 | orchestrator | Saturday 01 November 2025 14:05:41 +0000 (0:00:00.078) 0:00:37.419 ***** 2025-11-01 14:07:45.336123 | orchestrator | 2025-11-01 14:07:45.336127 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-01 14:07:45.336132 | orchestrator | Saturday 01 November 2025 14:05:41 +0000 (0:00:00.080) 0:00:37.499 ***** 2025-11-01 14:07:45.336137 | orchestrator | 2025-11-01 14:07:45.336141 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-01 14:07:45.336146 | orchestrator | Saturday 01 November 2025 14:05:41 +0000 (0:00:00.096) 0:00:37.595 ***** 2025-11-01 14:07:45.336151 | orchestrator | 2025-11-01 14:07:45.336155 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-01 14:07:45.336160 | orchestrator | Saturday 01 November 2025 14:05:41 +0000 (0:00:00.069) 0:00:37.665 ***** 2025-11-01 14:07:45.336165 | orchestrator | 2025-11-01 14:07:45.336169 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-11-01 14:07:45.336174 | orchestrator | Saturday 01 November 2025 14:05:41 +0000 (0:00:00.069) 0:00:37.734 ***** 2025-11-01 14:07:45.336179 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:07:45.336184 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:07:45.336188 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:07:45.336193 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.336198 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.336202 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.336207 | orchestrator | 2025-11-01 14:07:45.336214 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-11-01 14:07:45.336219 | orchestrator | Saturday 01 November 2025 14:05:44 +0000 (0:00:02.869) 0:00:40.604 ***** 2025-11-01 14:07:45.336224 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:07:45.336229 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:07:45.336233 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:07:45.336238 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:07:45.336242 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:07:45.336247 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:07:45.336252 | orchestrator | 2025-11-01 
14:07:45.336256 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-11-01 14:07:45.336261 | orchestrator | 2025-11-01 14:07:45.336266 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-11-01 14:07:45.336270 | orchestrator | Saturday 01 November 2025 14:06:18 +0000 (0:00:33.441) 0:01:14.045 ***** 2025-11-01 14:07:45.336275 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:07:45.336280 | orchestrator | 2025-11-01 14:07:45.336284 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-11-01 14:07:45.336289 | orchestrator | Saturday 01 November 2025 14:06:18 +0000 (0:00:00.784) 0:01:14.830 ***** 2025-11-01 14:07:45.336294 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:07:45.336299 | orchestrator | 2025-11-01 14:07:45.336303 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-11-01 14:07:45.336308 | orchestrator | Saturday 01 November 2025 14:06:19 +0000 (0:00:00.599) 0:01:15.429 ***** 2025-11-01 14:07:45.336313 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.336318 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.336322 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.336327 | orchestrator | 2025-11-01 14:07:45.336335 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-11-01 14:07:45.336340 | orchestrator | Saturday 01 November 2025 14:06:20 +0000 (0:00:01.341) 0:01:16.771 ***** 2025-11-01 14:07:45.336345 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.336349 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.336354 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.336361 | orchestrator | 2025-11-01 14:07:45.336366 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-11-01 14:07:45.336371 | orchestrator | Saturday 01 November 2025 14:06:21 +0000 (0:00:00.450) 0:01:17.222 ***** 2025-11-01 14:07:45.336375 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.336380 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.336385 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.336389 | orchestrator | 2025-11-01 14:07:45.336394 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-11-01 14:07:45.336399 | orchestrator | Saturday 01 November 2025 14:06:21 +0000 (0:00:00.359) 0:01:17.582 ***** 2025-11-01 14:07:45.336403 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.336408 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.336413 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.336417 | orchestrator | 2025-11-01 14:07:45.336422 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-11-01 14:07:45.336427 | orchestrator | Saturday 01 November 2025 14:06:21 +0000 (0:00:00.446) 0:01:18.028 ***** 2025-11-01 14:07:45.336432 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.336436 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.336441 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.336445 | orchestrator | 2025-11-01 14:07:45.336450 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] 
************************ 2025-11-01 14:07:45.336455 | orchestrator | Saturday 01 November 2025 14:06:22 +0000 (0:00:00.956) 0:01:18.984 ***** 2025-11-01 14:07:45.336484 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336489 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336494 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336499 | orchestrator | 2025-11-01 14:07:45.336503 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-11-01 14:07:45.336508 | orchestrator | Saturday 01 November 2025 14:06:23 +0000 (0:00:00.397) 0:01:19.382 ***** 2025-11-01 14:07:45.336513 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336518 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336522 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336527 | orchestrator | 2025-11-01 14:07:45.336532 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-11-01 14:07:45.336536 | orchestrator | Saturday 01 November 2025 14:06:23 +0000 (0:00:00.358) 0:01:19.741 ***** 2025-11-01 14:07:45.336541 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336546 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336551 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336555 | orchestrator | 2025-11-01 14:07:45.336560 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-11-01 14:07:45.336565 | orchestrator | Saturday 01 November 2025 14:06:24 +0000 (0:00:00.380) 0:01:20.121 ***** 2025-11-01 14:07:45.336569 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336574 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336579 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336584 | orchestrator | 2025-11-01 14:07:45.336588 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-11-01 14:07:45.336593 | orchestrator | Saturday 01 November 2025 14:06:24 +0000 (0:00:00.512) 0:01:20.634 ***** 2025-11-01 14:07:45.336598 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336603 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336607 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336612 | orchestrator | 2025-11-01 14:07:45.336617 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-11-01 14:07:45.336625 | orchestrator | Saturday 01 November 2025 14:06:24 +0000 (0:00:00.322) 0:01:20.956 ***** 2025-11-01 14:07:45.336630 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336635 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336639 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336644 | orchestrator | 2025-11-01 14:07:45.336649 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-11-01 14:07:45.336653 | orchestrator | Saturday 01 November 2025 14:06:25 +0000 (0:00:00.297) 0:01:21.253 ***** 2025-11-01 14:07:45.336664 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336669 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336673 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336678 | orchestrator | 2025-11-01 14:07:45.336683 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-11-01 14:07:45.336687 | 
orchestrator | Saturday 01 November 2025 14:06:25 +0000 (0:00:00.335) 0:01:21.589 ***** 2025-11-01 14:07:45.336692 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336697 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336701 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336706 | orchestrator | 2025-11-01 14:07:45.336711 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-11-01 14:07:45.336715 | orchestrator | Saturday 01 November 2025 14:06:26 +0000 (0:00:00.556) 0:01:22.145 ***** 2025-11-01 14:07:45.336720 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336725 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336729 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336734 | orchestrator | 2025-11-01 14:07:45.336739 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-11-01 14:07:45.336744 | orchestrator | Saturday 01 November 2025 14:06:26 +0000 (0:00:00.320) 0:01:22.465 ***** 2025-11-01 14:07:45.336748 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336753 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336758 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336763 | orchestrator | 2025-11-01 14:07:45.336767 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-11-01 14:07:45.336772 | orchestrator | Saturday 01 November 2025 14:06:26 +0000 (0:00:00.344) 0:01:22.809 ***** 2025-11-01 14:07:45.336777 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336781 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336786 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336791 | orchestrator | 2025-11-01 14:07:45.336795 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-11-01 14:07:45.336800 | orchestrator | Saturday 01 November 2025 14:06:27 +0000 (0:00:00.327) 0:01:23.137 ***** 2025-11-01 14:07:45.336805 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336810 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336817 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336822 | orchestrator | 2025-11-01 14:07:45.336827 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-11-01 14:07:45.336832 | orchestrator | Saturday 01 November 2025 14:06:27 +0000 (0:00:00.306) 0:01:23.444 ***** 2025-11-01 14:07:45.336836 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:07:45.336841 | orchestrator | 2025-11-01 14:07:45.336846 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-11-01 14:07:45.336851 | orchestrator | Saturday 01 November 2025 14:06:28 +0000 (0:00:00.816) 0:01:24.260 ***** 2025-11-01 14:07:45.336855 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.336860 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.336865 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.336870 | orchestrator | 2025-11-01 14:07:45.336874 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-11-01 14:07:45.336879 | orchestrator | Saturday 01 November 2025 14:06:28 +0000 (0:00:00.464) 0:01:24.724 ***** 2025-11-01 14:07:45.336888 | orchestrator | ok: 
[testbed-node-0] 2025-11-01 14:07:45.336893 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.336897 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.336902 | orchestrator | 2025-11-01 14:07:45.336907 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-11-01 14:07:45.336911 | orchestrator | Saturday 01 November 2025 14:06:29 +0000 (0:00:00.447) 0:01:25.172 ***** 2025-11-01 14:07:45.336916 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336921 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336926 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336930 | orchestrator | 2025-11-01 14:07:45.336935 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-11-01 14:07:45.336940 | orchestrator | Saturday 01 November 2025 14:06:29 +0000 (0:00:00.581) 0:01:25.753 ***** 2025-11-01 14:07:45.336944 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336949 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336954 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336958 | orchestrator | 2025-11-01 14:07:45.336963 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-11-01 14:07:45.336968 | orchestrator | Saturday 01 November 2025 14:06:30 +0000 (0:00:00.366) 0:01:26.120 ***** 2025-11-01 14:07:45.336972 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.336977 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.336982 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.336986 | orchestrator | 2025-11-01 14:07:45.336991 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-11-01 14:07:45.336996 | orchestrator | Saturday 01 November 2025 14:06:30 +0000 (0:00:00.488) 0:01:26.608 ***** 2025-11-01 14:07:45.337001 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.337005 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.337010 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.337015 | orchestrator | 2025-11-01 14:07:45.337019 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-11-01 14:07:45.337024 | orchestrator | Saturday 01 November 2025 14:06:30 +0000 (0:00:00.419) 0:01:27.028 ***** 2025-11-01 14:07:45.337029 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.337033 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.337038 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.337043 | orchestrator | 2025-11-01 14:07:45.337048 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-11-01 14:07:45.337052 | orchestrator | Saturday 01 November 2025 14:06:31 +0000 (0:00:00.682) 0:01:27.711 ***** 2025-11-01 14:07:45.337057 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.337062 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.337066 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.337071 | orchestrator | 2025-11-01 14:07:45.337076 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-11-01 14:07:45.337083 | orchestrator | Saturday 01 November 2025 14:06:32 +0000 (0:00:00.391) 0:01:28.102 ***** 2025-11-01 14:07:45.337089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337148 | orchestrator | 2025-11-01 14:07:45.337152 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-11-01 14:07:45.337157 | orchestrator | Saturday 01 November 2025 14:06:33 +0000 (0:00:01.514) 0:01:29.616 ***** 2025-11-01 14:07:45.337162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 
'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337215 | orchestrator | 2025-11-01 14:07:45.337220 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-11-01 14:07:45.337225 | orchestrator | Saturday 01 November 2025 14:06:37 +0000 (0:00:04.107) 0:01:33.724 ***** 2025-11-01 14:07:45.337230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
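[Annotation] The "Create br-int bridge on OpenvSwitch" and "Configure OVN in OVSDB" tasks in the ovn-controller play above register each host as an OVN chassis by writing external_ids into the local Open vSwitch database. A minimal manual sketch of those settings, using values copied from this log, is shown below; this is illustrative only (in the deployment kolla-ansible applies them, and its br-int task may set additional bridge options):
# register one chassis by hand (values taken from the testbed-node-0 entries above)
ovs-vsctl --may-exist add-br br-int
ovs-vsctl set Open_vSwitch . \
    external_ids:ovn-encap-ip=192.168.16.10 \
    external_ids:ovn-encap-type=geneve \
    external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" \
    external_ids:ovn-remote-probe-interval=60000 \
    external_ids:ovn-openflow-probe-interval=60 \
    external_ids:ovn-monitor-all=false
# gateway-capable nodes (testbed-node-0/1/2 in this run) additionally get:
ovs-vsctl set Open_vSwitch . \
    external_ids:ovn-bridge-mappings=physnet1:br-ex \
    external_ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"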
2025-11-01 14:07:45.337269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337283 | orchestrator | 2025-11-01 14:07:45.337288 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-01 14:07:45.337293 | orchestrator | Saturday 01 November 2025 14:06:40 +0000 (0:00:02.542) 0:01:36.267 ***** 2025-11-01 14:07:45.337298 | orchestrator | 2025-11-01 14:07:45.337303 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-01 14:07:45.337307 | orchestrator | Saturday 01 November 2025 14:06:40 +0000 (0:00:00.073) 0:01:36.340 ***** 2025-11-01 14:07:45.337312 | orchestrator | 2025-11-01 14:07:45.337317 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-01 14:07:45.337321 | orchestrator | Saturday 01 November 2025 14:06:40 +0000 (0:00:00.070) 0:01:36.410 ***** 2025-11-01 14:07:45.337326 | orchestrator | 2025-11-01 14:07:45.337331 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-11-01 14:07:45.337335 | orchestrator | Saturday 01 November 2025 14:06:40 +0000 (0:00:00.084) 0:01:36.494 ***** 2025-11-01 14:07:45.337340 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:07:45.337345 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:07:45.337350 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:07:45.337354 | orchestrator | 2025-11-01 14:07:45.337359 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-11-01 14:07:45.337367 | orchestrator | Saturday 01 November 2025 14:06:48 +0000 (0:00:07.623) 0:01:44.118 ***** 2025-11-01 14:07:45.337372 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:07:45.337376 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:07:45.337381 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:07:45.337386 | orchestrator | 2025-11-01 14:07:45.337390 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-11-01 14:07:45.337395 | orchestrator | Saturday 01 November 2025 14:06:55 +0000 (0:00:07.555) 0:01:51.673 ***** 2025-11-01 14:07:45.337400 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:07:45.337405 | 
orchestrator | changed: [testbed-node-1] 2025-11-01 14:07:45.337409 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:07:45.337414 | orchestrator | 2025-11-01 14:07:45.337421 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-11-01 14:07:45.337426 | orchestrator | Saturday 01 November 2025 14:07:03 +0000 (0:00:07.692) 0:01:59.366 ***** 2025-11-01 14:07:45.337431 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.337436 | orchestrator | 2025-11-01 14:07:45.337440 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-11-01 14:07:45.337445 | orchestrator | Saturday 01 November 2025 14:07:03 +0000 (0:00:00.607) 0:01:59.974 ***** 2025-11-01 14:07:45.337450 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.337455 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.337471 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.337475 | orchestrator | 2025-11-01 14:07:45.337480 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-11-01 14:07:45.337485 | orchestrator | Saturday 01 November 2025 14:07:05 +0000 (0:00:01.175) 0:02:01.149 ***** 2025-11-01 14:07:45.337490 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.337494 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.337499 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:07:45.337504 | orchestrator | 2025-11-01 14:07:45.337508 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-11-01 14:07:45.337513 | orchestrator | Saturday 01 November 2025 14:07:05 +0000 (0:00:00.717) 0:02:01.866 ***** 2025-11-01 14:07:45.337518 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.337522 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.337527 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.337532 | orchestrator | 2025-11-01 14:07:45.337537 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-11-01 14:07:45.337541 | orchestrator | Saturday 01 November 2025 14:07:06 +0000 (0:00:00.825) 0:02:02.691 ***** 2025-11-01 14:07:45.337546 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.337551 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.337555 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:07:45.337560 | orchestrator | 2025-11-01 14:07:45.337565 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-11-01 14:07:45.337570 | orchestrator | Saturday 01 November 2025 14:07:07 +0000 (0:00:00.684) 0:02:03.376 ***** 2025-11-01 14:07:45.337574 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.337579 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.337586 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.337591 | orchestrator | 2025-11-01 14:07:45.337596 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-11-01 14:07:45.337600 | orchestrator | Saturday 01 November 2025 14:07:08 +0000 (0:00:01.277) 0:02:04.654 ***** 2025-11-01 14:07:45.337605 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.337610 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.337614 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.337619 | orchestrator | 2025-11-01 14:07:45.337624 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] 
************************************** 2025-11-01 14:07:45.337629 | orchestrator | Saturday 01 November 2025 14:07:09 +0000 (0:00:00.801) 0:02:05.455 ***** 2025-11-01 14:07:45.337633 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.337638 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.337643 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.337651 | orchestrator | 2025-11-01 14:07:45.337656 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-11-01 14:07:45.337661 | orchestrator | Saturday 01 November 2025 14:07:09 +0000 (0:00:00.323) 0:02:05.779 ***** 2025-11-01 14:07:45.337665 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337670 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337675 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337680 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337686 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337693 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337698 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
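[Annotation] The ovn-db play above bootstraps the clustered NB/SB databases, waits for leader election, and then configures the TCP connection settings on the elected leader only (note the skips on testbed-node-1/2). A rough sketch of equivalent manual checks is given below; the control-socket paths and the assumption that the ovs/ovn CLI tools are available inside the ovn_nb_db/ovn_sb_db containers are guesses based on typical kolla OVN images, and the node-0 address is taken from this log as an example:
# inspect Raft cluster status of the NB and SB databases (paths are assumptions)
docker exec ovn_nb_db ovs-appctl -t /run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
docker exec ovn_sb_db ovs-appctl -t /run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound
# the "Configure OVN NB/SB connection settings" tasks expose the databases on TCP,
# roughly equivalent to running on the leader:
docker exec ovn_nb_db ovn-nbctl set-connection ptcp:6641:192.168.16.10
docker exec ovn_sb_db ovn-sbctl set-connection ptcp:6642:192.168.16.10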
2025-11-01 14:07:45.337703 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337712 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337718 | orchestrator | 2025-11-01 14:07:45.337722 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-11-01 14:07:45.337730 | orchestrator | Saturday 01 November 2025 14:07:11 +0000 (0:00:01.529) 0:02:07.309 ***** 2025-11-01 14:07:45.337735 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337740 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337745 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337750 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337760 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337782 | orchestrator | 2025-11-01 14:07:45.337787 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-11-01 14:07:45.337795 | orchestrator | Saturday 01 November 2025 14:07:16 +0000 (0:00:05.111) 0:02:12.420 ***** 2025-11-01 14:07:45.337803 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337808 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337813 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 
'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337828 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337846 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:07:45.337851 | orchestrator | 2025-11-01 14:07:45.337856 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-01 14:07:45.337861 | orchestrator | Saturday 01 November 2025 14:07:19 +0000 (0:00:03.231) 0:02:15.652 ***** 2025-11-01 14:07:45.337870 | orchestrator | 2025-11-01 14:07:45.337875 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-01 14:07:45.337879 | orchestrator | Saturday 01 November 2025 14:07:19 +0000 (0:00:00.069) 0:02:15.722 ***** 2025-11-01 14:07:45.337884 | orchestrator | 2025-11-01 14:07:45.337889 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-01 14:07:45.337894 | orchestrator | Saturday 01 November 2025 14:07:19 +0000 (0:00:00.089) 0:02:15.811 ***** 2025-11-01 14:07:45.337898 | orchestrator | 2025-11-01 14:07:45.337903 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-11-01 14:07:45.337908 | orchestrator | Saturday 01 November 2025 14:07:19 +0000 (0:00:00.075) 0:02:15.886 ***** 2025-11-01 14:07:45.337912 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:07:45.337917 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:07:45.337922 | orchestrator | 2025-11-01 14:07:45.337929 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-11-01 14:07:45.337933 | orchestrator | Saturday 01 November 2025 14:07:26 +0000 (0:00:06.344) 0:02:22.231 ***** 2025-11-01 14:07:45.337938 | 
orchestrator | changed: [testbed-node-2] 2025-11-01 14:07:45.337943 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:07:45.337948 | orchestrator | 2025-11-01 14:07:45.337952 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-11-01 14:07:45.337957 | orchestrator | Saturday 01 November 2025 14:07:32 +0000 (0:00:06.241) 0:02:28.472 ***** 2025-11-01 14:07:45.337962 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:07:45.337966 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:07:45.337971 | orchestrator | 2025-11-01 14:07:45.337976 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-11-01 14:07:45.337981 | orchestrator | Saturday 01 November 2025 14:07:38 +0000 (0:00:06.558) 0:02:35.031 ***** 2025-11-01 14:07:45.337985 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:07:45.337990 | orchestrator | 2025-11-01 14:07:45.337995 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-11-01 14:07:45.337999 | orchestrator | Saturday 01 November 2025 14:07:39 +0000 (0:00:00.157) 0:02:35.188 ***** 2025-11-01 14:07:45.338004 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.338009 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.338014 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.338066 | orchestrator | 2025-11-01 14:07:45.338071 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-11-01 14:07:45.338076 | orchestrator | Saturday 01 November 2025 14:07:40 +0000 (0:00:00.870) 0:02:36.059 ***** 2025-11-01 14:07:45.338080 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.338085 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.338090 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:07:45.338095 | orchestrator | 2025-11-01 14:07:45.338099 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-11-01 14:07:45.338104 | orchestrator | Saturday 01 November 2025 14:07:40 +0000 (0:00:00.638) 0:02:36.698 ***** 2025-11-01 14:07:45.338109 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.338113 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.338118 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.338123 | orchestrator | 2025-11-01 14:07:45.338127 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-11-01 14:07:45.338132 | orchestrator | Saturday 01 November 2025 14:07:41 +0000 (0:00:00.826) 0:02:37.524 ***** 2025-11-01 14:07:45.338137 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:07:45.338141 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:07:45.338146 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:07:45.338151 | orchestrator | 2025-11-01 14:07:45.338155 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-11-01 14:07:45.338160 | orchestrator | Saturday 01 November 2025 14:07:42 +0000 (0:00:00.609) 0:02:38.134 ***** 2025-11-01 14:07:45.338165 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.338170 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.338179 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.338184 | orchestrator | 2025-11-01 14:07:45.338189 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-11-01 14:07:45.338193 | orchestrator | Saturday 
01 November 2025 14:07:42 +0000 (0:00:00.718) 0:02:38.852 ***** 2025-11-01 14:07:45.338198 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:07:45.338203 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:07:45.338207 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:07:45.338212 | orchestrator | 2025-11-01 14:07:45.338217 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:07:45.338221 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-11-01 14:07:45.338230 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-11-01 14:07:45.338235 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-11-01 14:07:45.338240 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:07:45.338245 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:07:45.338249 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:07:45.338254 | orchestrator | 2025-11-01 14:07:45.338259 | orchestrator | 2025-11-01 14:07:45.338263 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:07:45.338268 | orchestrator | Saturday 01 November 2025 14:07:43 +0000 (0:00:00.892) 0:02:39.745 ***** 2025-11-01 14:07:45.338273 | orchestrator | =============================================================================== 2025-11-01 14:07:45.338278 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 33.44s 2025-11-01 14:07:45.338282 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.03s 2025-11-01 14:07:45.338287 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.25s 2025-11-01 14:07:45.338292 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.97s 2025-11-01 14:07:45.338296 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.80s 2025-11-01 14:07:45.338301 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.11s 2025-11-01 14:07:45.338306 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.11s 2025-11-01 14:07:45.338314 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.23s 2025-11-01 14:07:45.338318 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.87s 2025-11-01 14:07:45.338323 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.71s 2025-11-01 14:07:45.338328 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.54s 2025-11-01 14:07:45.338332 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.41s 2025-11-01 14:07:45.338337 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.20s 2025-11-01 14:07:45.338342 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.00s 2025-11-01 14:07:45.338346 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.57s 2025-11-01 
14:07:45.338351 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.53s 2025-11-01 14:07:45.338356 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.51s 2025-11-01 14:07:45.338361 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.48s 2025-11-01 14:07:45.338371 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.35s 2025-11-01 14:07:45.338375 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.34s 2025-11-01 14:07:45.338380 | orchestrator | 2025-11-01 14:07:45 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:45.338385 | orchestrator | 2025-11-01 14:07:45 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:45.338390 | orchestrator | 2025-11-01 14:07:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:48.376596 | orchestrator | 2025-11-01 14:07:48 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:48.377022 | orchestrator | 2025-11-01 14:07:48 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:48.377117 | orchestrator | 2025-11-01 14:07:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:51.427815 | orchestrator | 2025-11-01 14:07:51 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:51.428701 | orchestrator | 2025-11-01 14:07:51 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:51.428744 | orchestrator | 2025-11-01 14:07:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:54.481332 | orchestrator | 2025-11-01 14:07:54 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:54.482664 | orchestrator | 2025-11-01 14:07:54 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:54.482699 | orchestrator | 2025-11-01 14:07:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:07:57.525530 | orchestrator | 2025-11-01 14:07:57 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:07:57.527039 | orchestrator | 2025-11-01 14:07:57 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:07:57.527071 | orchestrator | 2025-11-01 14:07:57 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:00.582185 | orchestrator | 2025-11-01 14:08:00 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:00.582446 | orchestrator | 2025-11-01 14:08:00 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:00.582520 | orchestrator | 2025-11-01 14:08:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:03.615069 | orchestrator | 2025-11-01 14:08:03 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:03.615666 | orchestrator | 2025-11-01 14:08:03 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:03.615953 | orchestrator | 2025-11-01 14:08:03 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:06.657814 | orchestrator | 2025-11-01 14:08:06 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:06.660384 | orchestrator | 2025-11-01 14:08:06 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in 
state STARTED 2025-11-01 14:08:06.660422 | orchestrator | 2025-11-01 14:08:06 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:09.711811 | orchestrator | 2025-11-01 14:08:09 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:09.713777 | orchestrator | 2025-11-01 14:08:09 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:09.713817 | orchestrator | 2025-11-01 14:08:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:12.752439 | orchestrator | 2025-11-01 14:08:12 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:12.753144 | orchestrator | 2025-11-01 14:08:12 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:12.753178 | orchestrator | 2025-11-01 14:08:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:15.801004 | orchestrator | 2025-11-01 14:08:15 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:15.802973 | orchestrator | 2025-11-01 14:08:15 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:15.803023 | orchestrator | 2025-11-01 14:08:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:18.844716 | orchestrator | 2025-11-01 14:08:18 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:18.848125 | orchestrator | 2025-11-01 14:08:18 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:18.848210 | orchestrator | 2025-11-01 14:08:18 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:21.874351 | orchestrator | 2025-11-01 14:08:21 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:21.877281 | orchestrator | 2025-11-01 14:08:21 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:21.877313 | orchestrator | 2025-11-01 14:08:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:24.910570 | orchestrator | 2025-11-01 14:08:24 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:24.911153 | orchestrator | 2025-11-01 14:08:24 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:24.911182 | orchestrator | 2025-11-01 14:08:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:27.941594 | orchestrator | 2025-11-01 14:08:27 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:27.942282 | orchestrator | 2025-11-01 14:08:27 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:27.942316 | orchestrator | 2025-11-01 14:08:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:30.986840 | orchestrator | 2025-11-01 14:08:30 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:30.990542 | orchestrator | 2025-11-01 14:08:30 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:30.990578 | orchestrator | 2025-11-01 14:08:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:34.037549 | orchestrator | 2025-11-01 14:08:34 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:34.040017 | orchestrator | 2025-11-01 14:08:34 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:34.040072 | orchestrator | 2025-11-01 14:08:34 | INFO  | Wait 1 
second(s) until the next check 2025-11-01 14:08:37.075754 | orchestrator | 2025-11-01 14:08:37 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:37.075939 | orchestrator | 2025-11-01 14:08:37 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:37.075960 | orchestrator | 2025-11-01 14:08:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:40.117401 | orchestrator | 2025-11-01 14:08:40 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:40.120310 | orchestrator | 2025-11-01 14:08:40 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:40.120344 | orchestrator | 2025-11-01 14:08:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:43.157776 | orchestrator | 2025-11-01 14:08:43 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:43.160103 | orchestrator | 2025-11-01 14:08:43 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:43.160132 | orchestrator | 2025-11-01 14:08:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:46.206259 | orchestrator | 2025-11-01 14:08:46 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:46.206388 | orchestrator | 2025-11-01 14:08:46 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:46.206405 | orchestrator | 2025-11-01 14:08:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:49.246209 | orchestrator | 2025-11-01 14:08:49 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:49.246563 | orchestrator | 2025-11-01 14:08:49 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:49.246591 | orchestrator | 2025-11-01 14:08:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:52.302389 | orchestrator | 2025-11-01 14:08:52 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:52.302529 | orchestrator | 2025-11-01 14:08:52 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:52.302547 | orchestrator | 2025-11-01 14:08:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:55.347795 | orchestrator | 2025-11-01 14:08:55 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:55.350597 | orchestrator | 2025-11-01 14:08:55 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:55.350630 | orchestrator | 2025-11-01 14:08:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:08:58.389251 | orchestrator | 2025-11-01 14:08:58 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:08:58.391619 | orchestrator | 2025-11-01 14:08:58 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:08:58.391828 | orchestrator | 2025-11-01 14:08:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:01.432404 | orchestrator | 2025-11-01 14:09:01 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:01.436335 | orchestrator | 2025-11-01 14:09:01 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:01.436840 | orchestrator | 2025-11-01 14:09:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:04.493983 | orchestrator | 2025-11-01 14:09:04 | 
INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:04.498186 | orchestrator | 2025-11-01 14:09:04 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:04.498231 | orchestrator | 2025-11-01 14:09:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:07.546310 | orchestrator | 2025-11-01 14:09:07 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:07.547781 | orchestrator | 2025-11-01 14:09:07 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:07.547817 | orchestrator | 2025-11-01 14:09:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:10.590212 | orchestrator | 2025-11-01 14:09:10 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:10.591959 | orchestrator | 2025-11-01 14:09:10 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:10.592027 | orchestrator | 2025-11-01 14:09:10 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:13.645889 | orchestrator | 2025-11-01 14:09:13 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:13.646523 | orchestrator | 2025-11-01 14:09:13 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:13.646559 | orchestrator | 2025-11-01 14:09:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:16.689419 | orchestrator | 2025-11-01 14:09:16 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:16.690289 | orchestrator | 2025-11-01 14:09:16 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:16.690322 | orchestrator | 2025-11-01 14:09:16 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:19.727039 | orchestrator | 2025-11-01 14:09:19 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:19.727358 | orchestrator | 2025-11-01 14:09:19 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:19.727374 | orchestrator | 2025-11-01 14:09:19 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:22.764051 | orchestrator | 2025-11-01 14:09:22 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:22.766446 | orchestrator | 2025-11-01 14:09:22 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:22.766567 | orchestrator | 2025-11-01 14:09:22 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:25.809164 | orchestrator | 2025-11-01 14:09:25 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:25.811255 | orchestrator | 2025-11-01 14:09:25 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:25.811281 | orchestrator | 2025-11-01 14:09:25 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:28.847851 | orchestrator | 2025-11-01 14:09:28 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:28.850200 | orchestrator | 2025-11-01 14:09:28 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:28.850226 | orchestrator | 2025-11-01 14:09:28 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:31.891363 | orchestrator | 2025-11-01 14:09:31 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:31.891654 | 
orchestrator | 2025-11-01 14:09:31 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:31.891744 | orchestrator | 2025-11-01 14:09:31 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:34.945587 | orchestrator | 2025-11-01 14:09:34 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:34.952289 | orchestrator | 2025-11-01 14:09:34 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:34.952320 | orchestrator | 2025-11-01 14:09:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:38.020152 | orchestrator | 2025-11-01 14:09:38 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:38.025945 | orchestrator | 2025-11-01 14:09:38 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:38.025976 | orchestrator | 2025-11-01 14:09:38 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:41.073693 | orchestrator | 2025-11-01 14:09:41 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:41.075371 | orchestrator | 2025-11-01 14:09:41 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:41.075463 | orchestrator | 2025-11-01 14:09:41 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:44.120648 | orchestrator | 2025-11-01 14:09:44 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:44.122129 | orchestrator | 2025-11-01 14:09:44 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:44.123681 | orchestrator | 2025-11-01 14:09:44 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:47.162201 | orchestrator | 2025-11-01 14:09:47 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:47.167567 | orchestrator | 2025-11-01 14:09:47 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:47.167636 | orchestrator | 2025-11-01 14:09:47 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:50.209678 | orchestrator | 2025-11-01 14:09:50 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:50.213720 | orchestrator | 2025-11-01 14:09:50 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:50.213752 | orchestrator | 2025-11-01 14:09:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:53.258187 | orchestrator | 2025-11-01 14:09:53 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:53.259521 | orchestrator | 2025-11-01 14:09:53 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:53.259552 | orchestrator | 2025-11-01 14:09:53 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:56.311262 | orchestrator | 2025-11-01 14:09:56 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:56.314599 | orchestrator | 2025-11-01 14:09:56 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:09:56.315625 | orchestrator | 2025-11-01 14:09:56 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:09:59.382608 | orchestrator | 2025-11-01 14:09:59 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:09:59.384828 | orchestrator | 2025-11-01 14:09:59 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state 
STARTED 2025-11-01 14:09:59.384859 | orchestrator | 2025-11-01 14:09:59 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:02.438569 | orchestrator | 2025-11-01 14:10:02 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:02.439820 | orchestrator | 2025-11-01 14:10:02 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:10:02.439851 | orchestrator | 2025-11-01 14:10:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:05.500249 | orchestrator | 2025-11-01 14:10:05 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:05.503376 | orchestrator | 2025-11-01 14:10:05 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:10:05.503408 | orchestrator | 2025-11-01 14:10:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:08.550978 | orchestrator | 2025-11-01 14:10:08 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:08.551648 | orchestrator | 2025-11-01 14:10:08 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:10:08.551680 | orchestrator | 2025-11-01 14:10:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:11.599554 | orchestrator | 2025-11-01 14:10:11 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:11.600286 | orchestrator | 2025-11-01 14:10:11 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:10:11.600317 | orchestrator | 2025-11-01 14:10:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:14.649686 | orchestrator | 2025-11-01 14:10:14 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:14.650794 | orchestrator | 2025-11-01 14:10:14 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:10:14.653135 | orchestrator | 2025-11-01 14:10:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:17.705008 | orchestrator | 2025-11-01 14:10:17 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:17.707754 | orchestrator | 2025-11-01 14:10:17 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:10:17.708272 | orchestrator | 2025-11-01 14:10:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:20.748675 | orchestrator | 2025-11-01 14:10:20 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:20.750197 | orchestrator | 2025-11-01 14:10:20 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:10:20.750591 | orchestrator | 2025-11-01 14:10:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:23.802459 | orchestrator | 2025-11-01 14:10:23 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:23.804971 | orchestrator | 2025-11-01 14:10:23 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:10:23.805217 | orchestrator | 2025-11-01 14:10:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:26.859411 | orchestrator | 2025-11-01 14:10:26 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:26.863753 | orchestrator | 2025-11-01 14:10:26 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:10:26.863786 | orchestrator | 2025-11-01 14:10:26 | INFO  | Wait 1 second(s) 
until the next check 2025-11-01 14:10:29.908727 | orchestrator | 2025-11-01 14:10:29 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:29.910724 | orchestrator | 2025-11-01 14:10:29 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:10:29.910757 | orchestrator | 2025-11-01 14:10:29 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:32.972641 | orchestrator | 2025-11-01 14:10:32 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:32.975290 | orchestrator | 2025-11-01 14:10:32 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:10:32.975891 | orchestrator | 2025-11-01 14:10:32 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:36.015915 | orchestrator | 2025-11-01 14:10:36 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:36.017916 | orchestrator | 2025-11-01 14:10:36 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state STARTED 2025-11-01 14:10:36.017954 | orchestrator | 2025-11-01 14:10:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:39.070867 | orchestrator | 2025-11-01 14:10:39 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:10:39.070954 | orchestrator | 2025-11-01 14:10:39 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:39.073152 | orchestrator | 2025-11-01 14:10:39 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:10:39.082983 | orchestrator | 2025-11-01 14:10:39 | INFO  | Task 35b3d63f-4e9c-4f01-9382-fd66f067cba5 is in state SUCCESS 2025-11-01 14:10:39.087018 | orchestrator | 2025-11-01 14:10:39.087056 | orchestrator | 2025-11-01 14:10:39.087065 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:10:39.087074 | orchestrator | 2025-11-01 14:10:39.087136 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:10:39.087146 | orchestrator | Saturday 01 November 2025 14:03:40 +0000 (0:00:00.429) 0:00:00.429 ***** 2025-11-01 14:10:39.087153 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.087162 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.087170 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.087178 | orchestrator | 2025-11-01 14:10:39.087186 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:10:39.087194 | orchestrator | Saturday 01 November 2025 14:03:41 +0000 (0:00:00.505) 0:00:00.935 ***** 2025-11-01 14:10:39.087203 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-11-01 14:10:39.087210 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-11-01 14:10:39.087218 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-11-01 14:10:39.087226 | orchestrator | 2025-11-01 14:10:39.087233 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-11-01 14:10:39.087241 | orchestrator | 2025-11-01 14:10:39.087249 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-11-01 14:10:39.087258 | orchestrator | Saturday 01 November 2025 14:03:42 +0000 (0:00:00.863) 0:00:01.798 ***** 2025-11-01 14:10:39.087266 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.087274 | orchestrator | 2025-11-01 14:10:39.087282 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-11-01 14:10:39.087290 | orchestrator | Saturday 01 November 2025 14:03:43 +0000 (0:00:01.303) 0:00:03.102 ***** 2025-11-01 14:10:39.087297 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.087305 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.087313 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.087321 | orchestrator | 2025-11-01 14:10:39.087328 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-11-01 14:10:39.087359 | orchestrator | Saturday 01 November 2025 14:03:45 +0000 (0:00:02.004) 0:00:05.106 ***** 2025-11-01 14:10:39.087368 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.087376 | orchestrator | 2025-11-01 14:10:39.087384 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-11-01 14:10:39.087392 | orchestrator | Saturday 01 November 2025 14:03:46 +0000 (0:00:01.488) 0:00:06.594 ***** 2025-11-01 14:10:39.087399 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.087407 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.087415 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.087423 | orchestrator | 2025-11-01 14:10:39.087431 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-11-01 14:10:39.087438 | orchestrator | Saturday 01 November 2025 14:03:47 +0000 (0:00:00.875) 0:00:07.469 ***** 2025-11-01 14:10:39.087446 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-11-01 14:10:39.087454 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-11-01 14:10:39.087462 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-11-01 14:10:39.087470 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-11-01 14:10:39.087511 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-11-01 14:10:39.087569 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-11-01 14:10:39.087579 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-11-01 14:10:39.087587 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-11-01 14:10:39.087594 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-11-01 14:10:39.087603 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-11-01 14:10:39.087612 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-11-01 14:10:39.087621 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-11-01 14:10:39.087630 | orchestrator | 2025-11-01 14:10:39.087639 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-11-01 14:10:39.087648 | orchestrator | Saturday 01 November 2025 14:03:53 +0000 (0:00:05.454) 0:00:12.929 ***** 
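Before deploying haproxy, proxysql and keepalived, the loadbalancer play above tunes the kernel: ip_nonlocal_bind is set for IPv4 and IPv6 so that haproxy and keepalived can bind the virtual IP even on nodes that do not currently hold it, net.unix.max_dgram_qlen is raised to 128, and the ip_vs module is loaded in the module-load task that follows. A small Python sketch that reads these settings back on a node (illustrative only; the deployment applies them through the Ansible sysctl and modules-load.d tasks shown in the log, and changing them requires root):

# Illustrative read-back of the kernel settings applied by the loadbalancer play.
# Standard procfs paths; this is a verification sketch, not how Ansible applies them.
from pathlib import Path

SYSCTLS = {
    "net/ipv6/ip_nonlocal_bind": "1",   # allow binding a VIP not yet assigned locally
    "net/ipv4/ip_nonlocal_bind": "1",
    "net/unix/max_dgram_qlen": "128",
}

def sysctl_value(key: str) -> str:
    return Path("/proc/sys", key).read_text().strip()

def module_loaded(name: str) -> bool:
    # /proc/modules lists loaded modules; the module name is the first field per line
    return any(line.split()[0] == name
               for line in Path("/proc/modules").read_text().splitlines())

if __name__ == "__main__":
    for key, expected in SYSCTLS.items():
        print(f"{key}: current={sysctl_value(key)} expected={expected}")
    print("ip_vs loaded:", module_loaded("ip_vs"))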
2025-11-01 14:10:39.087721 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-11-01 14:10:39.087730 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-11-01 14:10:39.087739 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-11-01 14:10:39.087749 | orchestrator | 2025-11-01 14:10:39.087757 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-11-01 14:10:39.087766 | orchestrator | Saturday 01 November 2025 14:03:54 +0000 (0:00:01.732) 0:00:14.662 ***** 2025-11-01 14:10:39.087775 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-11-01 14:10:39.087784 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-11-01 14:10:39.087793 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-11-01 14:10:39.087802 | orchestrator | 2025-11-01 14:10:39.087811 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-11-01 14:10:39.087820 | orchestrator | Saturday 01 November 2025 14:03:57 +0000 (0:00:02.401) 0:00:17.064 ***** 2025-11-01 14:10:39.087830 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-11-01 14:10:39.087858 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.087878 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-11-01 14:10:39.087888 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.087898 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-11-01 14:10:39.087907 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.087915 | orchestrator | 2025-11-01 14:10:39.087924 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-11-01 14:10:39.087933 | orchestrator | Saturday 01 November 2025 14:03:58 +0000 (0:00:01.067) 0:00:18.131 ***** 2025-11-01 14:10:39.087944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.087958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.087972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.087984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.087994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.088007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.088016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 14:10:39.088025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 14:10:39.088033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 14:10:39.088046 | orchestrator | 2025-11-01 14:10:39.088054 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-11-01 14:10:39.088062 | orchestrator | Saturday 01 November 2025 14:04:01 +0000 (0:00:03.409) 0:00:21.541 ***** 2025-11-01 14:10:39.088070 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.088103 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.088112 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.088120 | orchestrator | 2025-11-01 14:10:39.088128 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-11-01 14:10:39.088136 | orchestrator | Saturday 01 November 2025 14:04:03 +0000 (0:00:02.117) 0:00:23.658 ***** 2025-11-01 14:10:39.088144 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-11-01 14:10:39.088152 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-11-01 14:10:39.088159 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-11-01 14:10:39.088167 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-11-01 14:10:39.088195 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-11-01 14:10:39.088204 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-11-01 14:10:39.088240 | orchestrator | 2025-11-01 14:10:39.088249 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-11-01 14:10:39.088261 | orchestrator | Saturday 01 November 2025 14:04:06 +0000 (0:00:02.879) 0:00:26.537 ***** 2025-11-01 14:10:39.088269 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.088277 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.088285 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.088293 | orchestrator | 2025-11-01 14:10:39.088300 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-11-01 14:10:39.088380 | orchestrator | Saturday 01 November 2025 14:04:08 +0000 (0:00:01.810) 0:00:28.347 ***** 2025-11-01 14:10:39.088389 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.088397 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.088405 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.088412 | orchestrator | 2025-11-01 14:10:39.088420 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-11-01 14:10:39.088428 | orchestrator | Saturday 01 November 2025 14:04:12 +0000 (0:00:03.761) 0:00:32.109 ***** 2025-11-01 14:10:39.088436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.088453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.088462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.088476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c5148b0df7ad4d6bd655707a504fb5d72f460d17', '__omit_place_holder__c5148b0df7ad4d6bd655707a504fb5d72f460d17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-01 14:10:39.088502 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.088511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.088523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.088553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.088562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.088577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.088591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.088599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c5148b0df7ad4d6bd655707a504fb5d72f460d17', '__omit_place_holder__c5148b0df7ad4d6bd655707a504fb5d72f460d17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-01 14:10:39.088608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c5148b0df7ad4d6bd655707a504fb5d72f460d17', 
'__omit_place_holder__c5148b0df7ad4d6bd655707a504fb5d72f460d17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-01 14:10:39.088616 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.088624 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.088632 | orchestrator | 2025-11-01 14:10:39.088640 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-11-01 14:10:39.088651 | orchestrator | Saturday 01 November 2025 14:04:13 +0000 (0:00:00.774) 0:00:32.884 ***** 2025-11-01 14:10:39.088660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.088668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.088683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.088699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.088708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.088716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c5148b0df7ad4d6bd655707a504fb5d72f460d17', '__omit_place_holder__c5148b0df7ad4d6bd655707a504fb5d72f460d17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-01 14:10:39.088728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.088737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.088745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c5148b0df7ad4d6bd655707a504fb5d72f460d17', '__omit_place_holder__c5148b0df7ad4d6bd655707a504fb5d72f460d17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-01 14:10:39.088792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.088803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.088811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c5148b0df7ad4d6bd655707a504fb5d72f460d17', '__omit_place_holder__c5148b0df7ad4d6bd655707a504fb5d72f460d17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-01 14:10:39.088820 | orchestrator | 2025-11-01 14:10:39.088828 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-11-01 14:10:39.088836 | orchestrator | Saturday 01 November 2025 14:04:17 +0000 (0:00:04.573) 0:00:37.458 ***** 2025-11-01 14:10:39.088848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.088856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.088864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.088935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.088945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.088977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.088986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 14:10:39.088999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 14:10:39.089007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 14:10:39.089016 | orchestrator | 2025-11-01 14:10:39.089024 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-11-01 14:10:39.089032 | orchestrator | Saturday 01 November 2025 14:04:22 +0000 (0:00:04.961) 0:00:42.419 ***** 2025-11-01 14:10:39.089045 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-11-01 14:10:39.089053 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-11-01 14:10:39.089061 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-11-01 14:10:39.089069 | orchestrator | 2025-11-01 14:10:39.089077 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-11-01 14:10:39.089085 | orchestrator | Saturday 01 November 2025 14:04:25 +0000 (0:00:03.333) 0:00:45.752 ***** 2025-11-01 14:10:39.089093 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-11-01 14:10:39.089100 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-11-01 14:10:39.089133 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-11-01 14:10:39.089141 | orchestrator | 2025-11-01 14:10:39.091416 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-11-01 14:10:39.091449 | orchestrator | Saturday 01 November 2025 14:04:30 +0000 (0:00:04.841) 0:00:50.594 ***** 2025-11-01 14:10:39.091457 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.091465 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.091473 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.091518 | orchestrator | 2025-11-01 14:10:39.091529 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-11-01 14:10:39.091559 | orchestrator | Saturday 01 November 2025 14:04:32 +0000 (0:00:01.621) 0:00:52.215 ***** 2025-11-01 14:10:39.091567 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-11-01 14:10:39.091576 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-11-01 14:10:39.091584 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-11-01 14:10:39.091592 | orchestrator | 2025-11-01 14:10:39.091600 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-11-01 14:10:39.091607 | orchestrator | Saturday 01 November 2025 14:04:36 +0000 (0:00:04.309) 0:00:56.524 ***** 2025-11-01 14:10:39.091683 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-11-01 14:10:39.091692 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-11-01 14:10:39.091700 | orchestrator | 
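The loop results above follow a simple selection pattern: the role walks its service dictionary and only acts on entries whose 'enabled' flag is true, which is why every haproxy-ssh item ('enabled': False) is reported as skipping while haproxy, proxysql and keepalived proceed. The '__omit_place_holder__<hash>' strings in the haproxy-ssh volume lists are Ansible's placeholder value for omitted optional entries; they appear verbatim here because omission only takes effect for whole module parameters, not for members of a list. A minimal Python sketch of the selection pattern, using only the keys and flags visible in the log (everything else left out; this is an illustration, not the role's own code):

    # Minimal sketch of the enabled-flag selection seen in the loop output.
    services = {
        "haproxy":     {"container_name": "haproxy",     "enabled": True},
        "proxysql":    {"container_name": "proxysql",    "enabled": True},
        "keepalived":  {"container_name": "keepalived",  "enabled": True},
        "haproxy-ssh": {"container_name": "haproxy_ssh", "enabled": False},
    }

    # The tasks above achieve the same effect with a loop over the services
    # dict and a when: condition on each item's enabled flag.
    for key, value in services.items():
        if not value["enabled"]:
            print(f"skipping: {key}")
            continue
        print(f"handling {value['container_name']}")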
changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-11-01 14:10:39.091708 | orchestrator | 2025-11-01 14:10:39.091716 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-11-01 14:10:39.091724 | orchestrator | Saturday 01 November 2025 14:04:40 +0000 (0:00:03.755) 0:01:00.279 ***** 2025-11-01 14:10:39.091732 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-11-01 14:10:39.091741 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-11-01 14:10:39.091748 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-11-01 14:10:39.091756 | orchestrator | 2025-11-01 14:10:39.091764 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-11-01 14:10:39.091772 | orchestrator | Saturday 01 November 2025 14:04:43 +0000 (0:00:02.751) 0:01:03.031 ***** 2025-11-01 14:10:39.091780 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-11-01 14:10:39.091787 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-11-01 14:10:39.091795 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-11-01 14:10:39.091812 | orchestrator | 2025-11-01 14:10:39.091820 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-11-01 14:10:39.091828 | orchestrator | Saturday 01 November 2025 14:04:45 +0000 (0:00:02.536) 0:01:05.567 ***** 2025-11-01 14:10:39.091836 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.091844 | orchestrator | 2025-11-01 14:10:39.091852 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-11-01 14:10:39.091864 | orchestrator | Saturday 01 November 2025 14:04:46 +0000 (0:00:00.905) 0:01:06.472 ***** 2025-11-01 14:10:39.091874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.091883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.091900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.091909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.091918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.091926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.091942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 14:10:39.091952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 14:10:39.091960 | 
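Each enabled service definition in this output carries a 'healthcheck' block: interval, retries, start_period and timeout given in seconds, plus a CMD-SHELL test such as healthcheck_curl or healthcheck_listen. As a rough illustration only, and not the deployment's own code, the sketch below shows how such a block would map onto Docker's health-check options:

    # Rough illustration: mapping a service's 'healthcheck' dict (as logged
    # above) onto docker run health flags. The dict values are seconds.
    def healthcheck_to_docker_flags(hc):
        test = hc["test"]
        # ['CMD-SHELL', '<command>'] means the command runs through a shell.
        cmd = test[1] if test[0] == "CMD-SHELL" else " ".join(test)
        return [
            "--health-cmd", cmd,
            "--health-interval", f"{hc['interval']}s",
            "--health-retries", str(hc["retries"]),
            "--health-start-period", f"{hc['start_period']}s",
            "--health-timeout", f"{hc['timeout']}s",
        ]

    haproxy_hc = {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
        "timeout": "30",
    }
    print(" ".join(healthcheck_to_docker_flags(haproxy_hc)))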
orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 14:10:39.091968 | orchestrator | 2025-11-01 14:10:39.091976 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-11-01 14:10:39.091984 | orchestrator | Saturday 01 November 2025 14:04:50 +0000 (0:00:03.984) 0:01:10.457 ***** 2025-11-01 14:10:39.091998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092031 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.092041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092072 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.092082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092137 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.092167 | orchestrator | 2025-11-01 14:10:39.092176 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-11-01 14:10:39.092185 | orchestrator | Saturday 01 November 2025 14:04:51 +0000 (0:00:01.210) 0:01:11.668 ***** 2025-11-01 14:10:39.092232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092290 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.092299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092308 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.092322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092351 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.092360 | orchestrator | 2025-11-01 14:10:39.092369 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-11-01 14:10:39.092376 | orchestrator | Saturday 01 November 2025 14:04:53 +0000 (0:00:01.399) 0:01:13.068 ***** 2025-11-01 14:10:39.092385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092418 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.092444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092469 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.092480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092504 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092525 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.092533 | orchestrator | 2025-11-01 14:10:39.092541 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-11-01 14:10:39.092549 | orchestrator | Saturday 01 November 2025 14:04:54 +0000 (0:00:01.042) 0:01:14.111 ***** 2025-11-01 14:10:39.092562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092586 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.092598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092623 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.092635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092668 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.092676 | orchestrator | 2025-11-01 14:10:39.092684 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-11-01 14:10:39.092692 | orchestrator | Saturday 01 November 2025 14:04:55 +0000 (0:00:00.808) 0:01:14.919 ***** 2025-11-01 14:10:39.092709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092738 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.092750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092805 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.092813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092842 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.092894 | orchestrator | 2025-11-01 14:10:39.092903 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-11-01 14:10:39.092911 | orchestrator | Saturday 01 November 2025 14:04:56 +0000 (0:00:01.137) 0:01:16.057 ***** 2025-11-01 14:10:39.092920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.092955 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.092963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.092971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.092983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.093011 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.093021 | orchestrator | 
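For context on the "Copying over config.json files for services" task earlier in this play: each kolla container is driven by a config.json placed under /etc/kolla/<container>/ on the host and bind-mounted read-only to /var/lib/kolla/config_files/ (the first volume entry in every item above); the container's kolla start script reads it to learn which command to run and which config files to copy into place. A generic sketch of that file's shape, with placeholder values rather than anything generated by this job:

    # Generic shape of a kolla config.json; all values below are
    # placeholders for illustration, not output of this deployment.
    import json

    example_config_json = {
        "command": "haproxy -W -db -f /etc/haproxy/haproxy.cfg",  # placeholder
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/haproxy.cfg",
                "dest": "/etc/haproxy/haproxy.cfg",
                "owner": "haproxy",
                "perm": "0600",
            },
        ],
    }
    print(json.dumps(example_config_json, indent=2))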
skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.093038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.093047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.093055 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.093063 | orchestrator | 2025-11-01 14:10:39.093071 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-11-01 14:10:39.093079 | orchestrator | Saturday 01 November 2025 14:04:57 +0000 (0:00:01.042) 0:01:17.100 ***** 2025-11-01 14:10:39.093087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.093095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 
14:10:39.093107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.093115 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.093123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.093136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.093150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.093159 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.093167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.093175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.093183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.093191 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.093199 | orchestrator | 2025-11-01 14:10:39.093216 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-11-01 14:10:39.093225 | orchestrator | Saturday 01 November 2025 14:04:58 +0000 (0:00:00.876) 0:01:17.976 ***** 2025-11-01 14:10:39.093244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.093257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.093265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.093274 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.093287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.093295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.093304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.093312 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.093320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-01 14:10:39.093335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 14:10:39.093344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 14:10:39.093352 | orchestrator | skipping: [testbed-node-2] 2025-11-01 
14:10:39.093360 | orchestrator | 2025-11-01 14:10:39.093368 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-11-01 14:10:39.093376 | orchestrator | Saturday 01 November 2025 14:04:59 +0000 (0:00:01.501) 0:01:19.478 ***** 2025-11-01 14:10:39.093384 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-11-01 14:10:39.093392 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-11-01 14:10:39.093404 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-11-01 14:10:39.093412 | orchestrator | 2025-11-01 14:10:39.093420 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-11-01 14:10:39.093427 | orchestrator | Saturday 01 November 2025 14:05:01 +0000 (0:00:02.194) 0:01:21.672 ***** 2025-11-01 14:10:39.093435 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-11-01 14:10:39.093443 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-11-01 14:10:39.093451 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-11-01 14:10:39.093459 | orchestrator | 2025-11-01 14:10:39.093467 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-11-01 14:10:39.093474 | orchestrator | Saturday 01 November 2025 14:05:03 +0000 (0:00:01.698) 0:01:23.371 ***** 2025-11-01 14:10:39.093572 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-11-01 14:10:39.093583 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-11-01 14:10:39.093591 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-11-01 14:10:39.093599 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.093607 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-11-01 14:10:39.093615 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.093651 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-11-01 14:10:39.093660 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-11-01 14:10:39.093668 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.093675 | orchestrator | 2025-11-01 14:10:39.093683 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-11-01 14:10:39.093698 | orchestrator | Saturday 01 November 2025 14:05:04 +0000 (0:00:01.258) 0:01:24.630 ***** 2025-11-01 14:10:39.093706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.093725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.093734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-01 14:10:39.093747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.093755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.093762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 14:10:39.093775 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 14:10:39.093782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 14:10:39.093792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 14:10:39.093799 | orchestrator | 2025-11-01 14:10:39.093806 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-11-01 14:10:39.093812 | orchestrator | Saturday 01 November 2025 14:05:07 +0000 (0:00:03.124) 0:01:27.754 ***** 2025-11-01 14:10:39.093819 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.093826 | orchestrator | 2025-11-01 14:10:39.093832 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-11-01 14:10:39.093839 | orchestrator | Saturday 01 November 2025 14:05:08 +0000 (0:00:00.830) 0:01:28.585 ***** 2025-11-01 14:10:39.093847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-11-01 14:10:39.093859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-11-01 14:10:39.093866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.093880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.093889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.093897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.093904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-11-01 
14:10:39.095050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-11-01 14:10:39.095113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.095120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095138 | orchestrator | 2025-11-01 14:10:39.095145 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-11-01 14:10:39.095152 | orchestrator | Saturday 01 November 2025 
14:05:13 +0000 (0:00:05.012) 0:01:33.598 ***** 2025-11-01 14:10:39.095159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-11-01 14:10:39.095172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.095180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-11-01 14:10:39.095191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.095198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095237 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.095244 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.095255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-11-01 14:10:39.095269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2025-11-01 14:10:39.095276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095294 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.095301 | orchestrator | 2025-11-01 14:10:39.095307 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-11-01 14:10:39.095314 | orchestrator | Saturday 01 November 2025 14:05:15 +0000 (0:00:01.554) 0:01:35.153 ***** 2025-11-01 14:10:39.095321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-11-01 14:10:39.095329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-11-01 14:10:39.095336 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.095343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-11-01 14:10:39.095350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-11-01 14:10:39.095356 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.095363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-11-01 14:10:39.095370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-11-01 14:10:39.095380 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.095387 | orchestrator | 2025-11-01 14:10:39.095397 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-11-01 14:10:39.095404 | orchestrator | Saturday 01 November 2025 14:05:16 +0000 (0:00:01.529) 0:01:36.682 ***** 2025-11-01 14:10:39.095410 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.095417 
| orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.095423 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.095430 | orchestrator | 2025-11-01 14:10:39.095436 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-11-01 14:10:39.095443 | orchestrator | Saturday 01 November 2025 14:05:18 +0000 (0:00:01.440) 0:01:38.123 ***** 2025-11-01 14:10:39.095449 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.095456 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.095462 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.095469 | orchestrator | 2025-11-01 14:10:39.095475 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-11-01 14:10:39.095556 | orchestrator | Saturday 01 November 2025 14:05:20 +0000 (0:00:02.362) 0:01:40.485 ***** 2025-11-01 14:10:39.095594 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.095601 | orchestrator | 2025-11-01 14:10:39.095608 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-11-01 14:10:39.095615 | orchestrator | Saturday 01 November 2025 14:05:22 +0000 (0:00:01.916) 0:01:42.402 ***** 2025-11-01 14:10:39.095623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.095635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}})  2025-11-01 14:10:39.095651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.095668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.095683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095712 | orchestrator | 2025-11-01 14:10:39.095719 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-11-01 14:10:39.095726 | orchestrator | Saturday 01 November 2025 14:05:26 +0000 (0:00:04.260) 0:01:46.662 ***** 2025-11-01 14:10:39.095756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.095763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  
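The haproxy-config loops in these tasks iterate over the kolla-ansible service definitions echoed in each item: only entries that carry a 'haproxy' block (here barbican-api) generate frontend configuration, while the keystone-listener and worker entries are skipped. A minimal Python sketch of that selection, using the barbican values copied from the log and a hypothetical services_needing_haproxy_config() helper rather than kolla-ansible's real template logic:

# Sketch only: shows why barbican-api is "changed" while the worker and
# keystone-listener items produce "skipping" lines. Values copied from the log;
# the helper below is illustrative, not kolla-ansible code.
services = {
    "barbican-api": {
        "container_name": "barbican_api",
        "enabled": True,
        "haproxy": {
            "barbican_api": {"enabled": "yes", "mode": "http", "external": False,
                             "port": "9311", "listen_port": "9311", "tls_backend": "no"},
            "barbican_api_external": {"enabled": "yes", "mode": "http", "external": True,
                                      "external_fqdn": "api.testbed.osism.xyz",
                                      "port": "9311", "listen_port": "9311", "tls_backend": "no"},
        },
    },
    "barbican-worker": {"container_name": "barbican_worker", "enabled": True},
}

def services_needing_haproxy_config(services):
    # Yield (service, frontend, listen port) only for entries exposing a
    # 'haproxy' block; everything else corresponds to the "skipping" records.
    for name, svc in services.items():
        if not svc.get("enabled") or "haproxy" not in svc:
            continue
        for frontend, cfg in svc["haproxy"].items():
            yield name, frontend, cfg["listen_port"]

for entry in services_needing_haproxy_config(services):
    print(entry)
# -> ('barbican-api', 'barbican_api', '9311')
# -> ('barbican-api', 'barbican_api_external', '9311')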
2025-11-01 14:10:39.095778 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.095788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.095796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095815 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.095826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.095833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.095848 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.095855 | orchestrator | 2025-11-01 14:10:39.095862 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-11-01 14:10:39.095869 | orchestrator | Saturday 01 November 2025 14:05:27 +0000 (0:00:00.781) 0:01:47.443 ***** 2025-11-01 14:10:39.095876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-01 14:10:39.095887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-01 14:10:39.095898 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.095905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-01 14:10:39.095913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-01 14:10:39.095920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-01 14:10:39.095927 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.095935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-01 14:10:39.095942 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.095950 | orchestrator | 2025-11-01 14:10:39.095962 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-11-01 14:10:39.095969 | orchestrator | Saturday 01 November 2025 14:05:29 +0000 (0:00:01.755) 0:01:49.198 ***** 2025-11-01 14:10:39.095975 | orchestrator 
| changed: [testbed-node-0] 2025-11-01 14:10:39.095981 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.095987 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.095993 | orchestrator | 2025-11-01 14:10:39.095999 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-11-01 14:10:39.096005 | orchestrator | Saturday 01 November 2025 14:05:31 +0000 (0:00:01.596) 0:01:50.794 ***** 2025-11-01 14:10:39.096012 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.096018 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.096024 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.096030 | orchestrator | 2025-11-01 14:10:39.096039 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-11-01 14:10:39.096045 | orchestrator | Saturday 01 November 2025 14:05:33 +0000 (0:00:02.308) 0:01:53.103 ***** 2025-11-01 14:10:39.096051 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.096057 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.096064 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.096070 | orchestrator | 2025-11-01 14:10:39.096082 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-11-01 14:10:39.096089 | orchestrator | Saturday 01 November 2025 14:05:33 +0000 (0:00:00.402) 0:01:53.506 ***** 2025-11-01 14:10:39.096095 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.096101 | orchestrator | 2025-11-01 14:10:39.096107 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-11-01 14:10:39.096113 | orchestrator | Saturday 01 November 2025 14:05:34 +0000 (0:00:01.042) 0:01:54.549 ***** 2025-11-01 14:10:39.096120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-11-01 14:10:39.096159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check 
inter 2000 rise 2 fall 5']}}}}) 2025-11-01 14:10:39.096167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-11-01 14:10:39.096173 | orchestrator | 2025-11-01 14:10:39.096179 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-11-01 14:10:39.096213 | orchestrator | Saturday 01 November 2025 14:05:39 +0000 (0:00:04.385) 0:01:58.934 ***** 2025-11-01 14:10:39.096224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-11-01 14:10:39.096231 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.096237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-11-01 14:10:39.096243 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.096250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 
fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-11-01 14:10:39.096281 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.096288 | orchestrator | 2025-11-01 14:10:39.096294 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-11-01 14:10:39.096300 | orchestrator | Saturday 01 November 2025 14:05:41 +0000 (0:00:02.635) 0:02:01.570 ***** 2025-11-01 14:10:39.096320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-01 14:10:39.096330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-01 14:10:39.096337 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.096344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-01 14:10:39.096351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-01 14:10:39.096360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-01 14:10:39.096367 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.096373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-01 14:10:39.096380 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.096386 | orchestrator | 2025-11-01 14:10:39.096392 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-11-01 14:10:39.096398 | orchestrator | Saturday 01 November 2025 14:05:45 +0000 (0:00:03.366) 0:02:04.937 ***** 2025-11-01 14:10:39.096411 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.096418 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.096424 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.096430 | orchestrator | 2025-11-01 14:10:39.096436 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-11-01 14:10:39.096442 | orchestrator | Saturday 01 November 2025 14:05:46 +0000 (0:00:01.115) 0:02:06.053 ***** 2025-11-01 14:10:39.096448 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.096454 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.096460 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.096466 | orchestrator | 2025-11-01 14:10:39.096473 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-11-01 14:10:39.096479 | orchestrator | Saturday 01 November 2025 14:05:47 +0000 (0:00:01.365) 0:02:07.418 ***** 2025-11-01 14:10:39.096498 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.096504 | orchestrator | 2025-11-01 14:10:39.096510 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-11-01 14:10:39.096516 | orchestrator | Saturday 01 November 2025 14:05:48 +0000 (0:00:00.950) 0:02:08.369 ***** 2025-11-01 14:10:39.096523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.096533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.096576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.096619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096639 | orchestrator | 2025-11-01 14:10:39.096648 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-11-01 14:10:39.096654 | orchestrator | Saturday 01 November 2025 14:05:53 +0000 (0:00:04.739) 0:02:13.108 ***** 2025-11-01 14:10:39.096661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.096668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.096688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096727 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.096738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096744 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.096751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.096757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.096797 | orchestrator | skipping: 
[testbed-node-2] 2025-11-01 14:10:39.096803 | orchestrator | 2025-11-01 14:10:39.096809 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-11-01 14:10:39.096816 | orchestrator | Saturday 01 November 2025 14:05:54 +0000 (0:00:01.010) 0:02:14.119 ***** 2025-11-01 14:10:39.096822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-11-01 14:10:39.096832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-11-01 14:10:39.096861 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.096868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-11-01 14:10:39.096874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-11-01 14:10:39.096880 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.096887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-11-01 14:10:39.096893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-11-01 14:10:39.096899 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.096905 | orchestrator | 2025-11-01 14:10:39.096912 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-11-01 14:10:39.096918 | orchestrator | Saturday 01 November 2025 14:05:55 +0000 (0:00:00.998) 0:02:15.118 ***** 2025-11-01 14:10:39.096924 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.096930 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.096936 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.096942 | orchestrator | 2025-11-01 14:10:39.096948 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-11-01 14:10:39.096954 | orchestrator | Saturday 01 November 2025 14:05:56 +0000 (0:00:01.518) 0:02:16.636 ***** 2025-11-01 14:10:39.096960 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.096966 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.096972 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.096978 | orchestrator | 2025-11-01 14:10:39.096984 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-11-01 14:10:39.096991 | orchestrator | Saturday 01 November 2025 14:05:59 +0000 (0:00:02.209) 0:02:18.845 ***** 2025-11-01 14:10:39.096997 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.097003 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.097009 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.097015 | 
orchestrator | 2025-11-01 14:10:39.097021 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-11-01 14:10:39.097027 | orchestrator | Saturday 01 November 2025 14:05:59 +0000 (0:00:00.619) 0:02:19.465 ***** 2025-11-01 14:10:39.097033 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.097039 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.097045 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.097051 | orchestrator | 2025-11-01 14:10:39.097057 | orchestrator | TASK [include_role : designate] ************************************************ 2025-11-01 14:10:39.097064 | orchestrator | Saturday 01 November 2025 14:06:00 +0000 (0:00:00.332) 0:02:19.798 ***** 2025-11-01 14:10:39.097077 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.097084 | orchestrator | 2025-11-01 14:10:39.097090 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-11-01 14:10:39.097096 | orchestrator | Saturday 01 November 2025 14:06:00 +0000 (0:00:00.897) 0:02:20.696 ***** 2025-11-01 14:10:39.097103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:10:39.097112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:10:39.097119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.097126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.097132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:10:39.097148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.097155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:10:39.097959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 
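[Editor's note] The haproxy-config output above follows one pattern throughout: the role iterates every service definition, and only the entries that carry a 'haproxy' mapping (designate-api here, cinder-api and barbican-api earlier) report "changed" listener config on the controllers, while backend-only services (designate-backend-bind9, -central, -mdns, -producer, -worker, -sink) are skipped. A minimal, illustrative Python sketch of that filtering, using data shapes copied from the log items above; this is not the actual kolla-ansible role code:

    # Illustrative only: keep just the service entries that define haproxy
    # frontends, the way designate-api does in the log above.
    services = {
        "designate-api": {
            "enabled": True,
            "haproxy": {
                "designate_api": {"enabled": "yes", "external": False, "port": "9001"},
                "designate_api_external": {
                    "enabled": "yes",
                    "external": True,
                    "external_fqdn": "api.testbed.osism.xyz",
                    "port": "9001",
                },
            },
        },
        "designate-worker": {"enabled": True},  # no 'haproxy' key -> skipped
    }

    frontends = {
        name: svc["haproxy"]
        for name, svc in services.items()
        if svc.get("enabled") and svc.get("haproxy")
    }
    print(frontends)  # only designate-api contributes listener config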
2025-11-01 14:10:39.097982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.097990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.097997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:10:39.098060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098167 | orchestrator | 2025-11-01 14:10:39.098173 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-11-01 14:10:39.098179 | orchestrator | Saturday 01 November 2025 14:06:05 +0000 (0:00:04.144) 0:02:24.840 ***** 2025-11-01 14:10:39.098214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:10:39.098223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:10:39.098229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-11-01 14:10:39.098244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098664 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.098671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:10:39.098678 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:10:39.098694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098788 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.098795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:10:39.098807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:10:39.098817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.098944 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.098949 | orchestrator | 2025-11-01 14:10:39.098955 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-11-01 14:10:39.098960 | orchestrator | Saturday 01 November 2025 14:06:06 +0000 (0:00:01.058) 0:02:25.899 ***** 2025-11-01 14:10:39.098966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-11-01 14:10:39.098973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-11-01 14:10:39.098978 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.098984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-11-01 14:10:39.098989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-11-01 14:10:39.098995 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.099000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-11-01 14:10:39.099009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-11-01 14:10:39.099015 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.099020 | orchestrator | 2025-11-01 14:10:39.099026 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 
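Before the designate ProxySQL steps that follow, note the loop pattern behind the per-item output above: the haproxy-config role iterates the service map with the dict2items filter, so every loop item is logged as {'key': <service>, 'value': {...}}, and the firewall task then iterates the nested 'haproxy' listener map the same way. A minimal Python sketch of that flattening, using an abbreviated copy of the designate entries shown above; the helper name dict2items mirrors the Ansible filter, and the role's real skip conditions (group membership, the requested kolla action, and so on) are not visible in this log, so they are not modelled here.

    # Sketch only: mimics how the dict2items filter turns the service map into
    # the (item={'key': ..., 'value': ...}) pairs printed in this log.
    services = {
        "designate-api": {
            "enabled": True,
            "haproxy": {
                "designate_api": {"enabled": "yes", "mode": "http",
                                  "external": False, "port": "9001"},
                "designate_api_external": {"enabled": "yes", "mode": "http",
                                           "external": True, "port": "9001"},
            },
        },
        # designate-sink is defined but disabled in the output above.
        "designate-sink": {"enabled": False, "haproxy": {}},
    }

    def dict2items(mapping):
        """Equivalent of Ansible's dict2items filter."""
        return [{"key": key, "value": value} for key, value in mapping.items()]

    # haproxy-config loops over the services; the firewall task loops over the
    # nested listener map of each enabled service.
    for item in dict2items(services):
        if not item["value"]["enabled"]:
            continue  # disabled services contribute no listeners in this sketch
        for listener in dict2items(item["value"]["haproxy"]):
            print(f"{item['key']}: {listener['key']} -> port {listener['value']['port']}")

The same flattening repeats for glance, grafana and horizon further down, which is why their task output has the identical {'key': ..., 'value': ...} shape.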
2025-11-01 14:10:39.099031 | orchestrator | Saturday 01 November 2025 14:06:07 +0000 (0:00:01.056) 0:02:26.956 ***** 2025-11-01 14:10:39.099037 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.099042 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.099047 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.099053 | orchestrator | 2025-11-01 14:10:39.099058 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-11-01 14:10:39.099063 | orchestrator | Saturday 01 November 2025 14:06:09 +0000 (0:00:01.885) 0:02:28.841 ***** 2025-11-01 14:10:39.099069 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.099074 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.099079 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.099085 | orchestrator | 2025-11-01 14:10:39.099090 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-11-01 14:10:39.099095 | orchestrator | Saturday 01 November 2025 14:06:10 +0000 (0:00:01.896) 0:02:30.738 ***** 2025-11-01 14:10:39.099101 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.099106 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.099111 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.099117 | orchestrator | 2025-11-01 14:10:39.099122 | orchestrator | TASK [include_role : glance] *************************************************** 2025-11-01 14:10:39.099127 | orchestrator | Saturday 01 November 2025 14:06:11 +0000 (0:00:00.596) 0:02:31.335 ***** 2025-11-01 14:10:39.100064 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.100090 | orchestrator | 2025-11-01 14:10:39.100097 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-11-01 14:10:39.100102 | orchestrator | Saturday 01 November 2025 14:06:12 +0000 (0:00:00.891) 0:02:32.226 ***** 2025-11-01 14:10:39.100167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:10:39.100189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-01 14:10:39.100234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:10:39.100249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-01 14:10:39.100291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:10:39.100304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-01 14:10:39.100311 | orchestrator | 2025-11-01 14:10:39.100316 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-11-01 14:10:39.100322 | orchestrator | Saturday 01 November 2025 14:06:17 +0000 (0:00:04.628) 0:02:36.854 ***** 2025-11-01 14:10:39.100364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 14:10:39.100377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-01 14:10:39.100383 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.100391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 14:10:39.100436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-01 14:10:39.100444 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.100453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 14:10:39.100506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-01 14:10:39.100520 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.100525 | orchestrator | 2025-11-01 14:10:39.100531 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-11-01 14:10:39.100536 | orchestrator | Saturday 01 November 2025 14:06:20 +0000 (0:00:03.522) 0:02:40.377 ***** 2025-11-01 14:10:39.100542 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-01 14:10:39.100548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-01 14:10:39.100602 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.100611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-01 14:10:39.100617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-01 14:10:39.100627 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.100632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-01 14:10:39.100680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-01 14:10:39.100688 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.100693 | orchestrator | 2025-11-01 14:10:39.100699 | orchestrator | TASK 
[proxysql-config : Copying over glance ProxySQL users config] ************* 2025-11-01 14:10:39.100704 | orchestrator | Saturday 01 November 2025 14:06:24 +0000 (0:00:03.941) 0:02:44.318 ***** 2025-11-01 14:10:39.100709 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.100715 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.100737 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.100749 | orchestrator | 2025-11-01 14:10:39.100754 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-11-01 14:10:39.100759 | orchestrator | Saturday 01 November 2025 14:06:25 +0000 (0:00:01.359) 0:02:45.678 ***** 2025-11-01 14:10:39.100765 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.100770 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.100776 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.100781 | orchestrator | 2025-11-01 14:10:39.100786 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-11-01 14:10:39.100792 | orchestrator | Saturday 01 November 2025 14:06:28 +0000 (0:00:02.206) 0:02:47.884 ***** 2025-11-01 14:10:39.100797 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.100802 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.100807 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.100812 | orchestrator | 2025-11-01 14:10:39.100818 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-11-01 14:10:39.100823 | orchestrator | Saturday 01 November 2025 14:06:28 +0000 (0:00:00.581) 0:02:48.465 ***** 2025-11-01 14:10:39.100828 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.100834 | orchestrator | 2025-11-01 14:10:39.100839 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-11-01 14:10:39.100844 | orchestrator | Saturday 01 November 2025 14:06:29 +0000 (0:00:00.861) 0:02:49.327 ***** 2025-11-01 14:10:39.100850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:10:39.100866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:10:39.100872 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:10:39.100878 | orchestrator | 2025-11-01 14:10:39.100883 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-11-01 14:10:39.100889 | orchestrator | Saturday 01 November 2025 14:06:33 +0000 (0:00:03.591) 0:02:52.919 ***** 2025-11-01 14:10:39.100933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-01 14:10:39.100941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-01 14:10:39.100946 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.100952 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.100957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-01 14:10:39.100963 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.100968 | orchestrator | 2025-11-01 14:10:39.100982 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-11-01 14:10:39.100992 | orchestrator | Saturday 01 November 2025 14:06:33 +0000 (0:00:00.706) 0:02:53.625 ***** 2025-11-01 
14:10:39.100998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-11-01 14:10:39.101004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-11-01 14:10:39.101010 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.101015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-11-01 14:10:39.101020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-11-01 14:10:39.101026 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.101031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-11-01 14:10:39.101037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-11-01 14:10:39.101042 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.101048 | orchestrator | 2025-11-01 14:10:39.101053 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-11-01 14:10:39.101058 | orchestrator | Saturday 01 November 2025 14:06:34 +0000 (0:00:00.822) 0:02:54.448 ***** 2025-11-01 14:10:39.101064 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.101069 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.101074 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.101079 | orchestrator | 2025-11-01 14:10:39.101085 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-11-01 14:10:39.101090 | orchestrator | Saturday 01 November 2025 14:06:36 +0000 (0:00:01.390) 0:02:55.838 ***** 2025-11-01 14:10:39.101095 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.101101 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.101106 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.101111 | orchestrator | 2025-11-01 14:10:39.101116 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-11-01 14:10:39.101122 | orchestrator | Saturday 01 November 2025 14:06:38 +0000 (0:00:02.130) 0:02:57.969 ***** 2025-11-01 14:10:39.101127 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.101132 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.101174 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.101182 | orchestrator | 2025-11-01 14:10:39.101187 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-11-01 14:10:39.101193 | orchestrator | Saturday 01 November 2025 14:06:38 +0000 (0:00:00.549) 0:02:58.519 ***** 2025-11-01 14:10:39.101198 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.101203 | orchestrator | 
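The 'haproxy' sub-dicts dumped for glance and grafana above, and for horizon in the next task, are what the role feeds into its HAProxy templates; the custom_member_list strings are literal backend server lines. A rough sketch of how those fields could be assembled into a backend stanza, using only values that appear verbatim in this log; the actual kolla-ansible templates emit additional frontend and backend directives, so treat this as an approximation rather than the rendered file, and the "_back" section name is an assumption for illustration.

    # Approximate rendering of the glance_api listener logged above; the real
    # kolla-ansible haproxy templates are more elaborate than this sketch.
    listener = {
        "enabled": True,
        "mode": "http",
        "port": "9292",
        "backend_http_extra": ["timeout server 6h"],
        "custom_member_list": [
            "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
            "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
            "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
        ],
    }

    def render_backend(name, cfg):
        """Build a plausible 'backend' section from the logged listener dict."""
        lines = [f"backend {name}_back", f"    mode {cfg['mode']}"]
        lines += [f"    {extra}" for extra in cfg.get("backend_http_extra", [])]
        lines += [f"    {member}" for member in cfg["custom_member_list"] if member]
        return "\n".join(lines)

    print(render_backend("glance_api", listener))

The trailing empty string present in each custom_member_list in the log is simply skipped in this sketch.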
2025-11-01 14:10:39.101208 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-11-01 14:10:39.101214 | orchestrator | Saturday 01 November 2025 14:06:39 +0000 (0:00:00.917) 0:02:59.436 ***** 2025-11-01 14:10:39.101234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 14:10:39.101283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 14:10:39.101295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 14:10:39.101305 | orchestrator | 2025-11-01 14:10:39.101311 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-11-01 14:10:39.101316 | orchestrator | Saturday 01 November 2025 14:06:43 +0000 (0:00:04.033) 
0:03:03.469 ***** 2025-11-01 14:10:39.101358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-01 14:10:39.101370 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.101379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-01 14:10:39.101386 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.101427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-01 14:10:39.101439 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.101445 | orchestrator | 2025-11-01 14:10:39.101461 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-11-01 14:10:39.101478 | orchestrator | Saturday 01 November 2025 14:06:44 +0000 (0:00:01.214) 0:03:04.684 ***** 
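The firewall task announced above follows the same per-service pattern as for designate, glance and grafana, and in the results below it is skipped on all three nodes; the log does not show the conditional, so the assumption that firewall management is disabled for this testbed is just that, an assumption. The data the task would work with is the same listener map, e.g. the horizon one dumped in the previous task; a small sketch of pulling the enabled listen ports out of an abbreviated copy of it (the real role may open different ports, or none at all).

    # Sketch only: collect listen ports from the horizon 'haproxy' map dumped
    # above. This does not claim to reproduce what the skipped task would do.
    horizon_haproxy = {
        "horizon": {"enabled": True, "external": False,
                    "port": "443", "listen_port": "80"},
        "horizon_redirect": {"enabled": True, "external": False,
                             "port": "80", "listen_port": "80"},
        "horizon_external": {"enabled": True, "external": True,
                             "port": "443", "listen_port": "80"},
        "horizon_external_redirect": {"enabled": True, "external": True,
                                      "port": "80", "listen_port": "80"},
        "acme_client": {"enabled": True, "with_frontend": False,
                        "custom_member_list": []},
    }

    listen_ports = sorted(
        {cfg["listen_port"] for cfg in horizon_haproxy.values()
         if cfg.get("enabled") and "listen_port" in cfg},
        key=int,
    )
    print(listen_ports)  # ['80'] for the entries shown in this log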
2025-11-01 14:10:39.101523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-01 14:10:39.101531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-01 14:10:39.101539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-01 14:10:39.101546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-01 14:10:39.101551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-11-01 14:10:39.101557 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.101563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-01 14:10:39.101568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-01 14:10:39.101574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-01 14:10:39.101622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-01 14:10:39.101635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-01 14:10:39.101641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-01 14:10:39.101655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-11-01 14:10:39.101661 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.101667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-01 14:10:39.101672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-01 14:10:39.101678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-11-01 14:10:39.101683 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.101689 | orchestrator | 2025-11-01 14:10:39.101694 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-11-01 14:10:39.101699 | orchestrator | Saturday 01 November 2025 14:06:45 +0000 (0:00:01.043) 0:03:05.728 ***** 2025-11-01 14:10:39.101705 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.101710 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.101715 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.101721 | orchestrator | 2025-11-01 14:10:39.101725 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-11-01 14:10:39.101733 | orchestrator | Saturday 01 November 2025 14:06:47 +0000 (0:00:01.315) 0:03:07.044 ***** 2025-11-01 14:10:39.101738 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.101742 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.101747 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.101752 | orchestrator | 2025-11-01 14:10:39.101756 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-11-01 14:10:39.101761 | orchestrator | Saturday 01 November 2025 14:06:49 +0000 (0:00:02.184) 0:03:09.229 ***** 2025-11-01 14:10:39.101766 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.101771 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.101775 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.101780 | orchestrator | 2025-11-01 14:10:39.101785 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-11-01 14:10:39.101790 | orchestrator | Saturday 01 November 2025 14:06:49 +0000 (0:00:00.310) 0:03:09.539 ***** 2025-11-01 14:10:39.101794 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.101799 | 
orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.101804 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.101808 | orchestrator | 2025-11-01 14:10:39.101813 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-11-01 14:10:39.101821 | orchestrator | Saturday 01 November 2025 14:06:50 +0000 (0:00:00.598) 0:03:10.138 ***** 2025-11-01 14:10:39.101826 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.101831 | orchestrator | 2025-11-01 14:10:39.101835 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-11-01 14:10:39.101840 | orchestrator | Saturday 01 November 2025 14:06:51 +0000 (0:00:01.022) 0:03:11.160 ***** 2025-11-01 14:10:39.101902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:10:39.101911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:10:39.101917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 
14:10:39.101925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:10:39.101930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 14:10:39.101941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 14:10:39.101980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:10:39.101987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:10:39.101993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 14:10:39.101997 | orchestrator | 2025-11-01 14:10:39.102002 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-11-01 14:10:39.102007 | orchestrator | Saturday 01 November 2025 14:06:55 +0000 (0:00:03.755) 0:03:14.916 ***** 2025-11-01 14:10:39.102033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 14:10:39.102044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:10:39.102084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 14:10:39.102091 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.102096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 14:10:39.102102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:10:39.102110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 14:10:39.102118 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.102124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 14:10:39.102160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:10:39.102167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 14:10:39.102172 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.102177 | orchestrator | 2025-11-01 14:10:39.102182 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-11-01 14:10:39.102187 | orchestrator | Saturday 01 November 2025 14:06:56 +0000 (0:00:01.203) 0:03:16.120 ***** 2025-11-01 14:10:39.102192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-01 14:10:39.102197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-01 14:10:39.102202 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.102207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-01 14:10:39.102212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-01 14:10:39.102229 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.102237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-01 14:10:39.102242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-01 14:10:39.102247 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.102252 | orchestrator | 2025-11-01 14:10:39.102257 | orchestrator | TASK 
[proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-11-01 14:10:39.102262 | orchestrator | Saturday 01 November 2025 14:06:57 +0000 (0:00:00.878) 0:03:16.999 ***** 2025-11-01 14:10:39.102266 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.102271 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.102276 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.102281 | orchestrator | 2025-11-01 14:10:39.102285 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-11-01 14:10:39.102290 | orchestrator | Saturday 01 November 2025 14:06:58 +0000 (0:00:01.358) 0:03:18.357 ***** 2025-11-01 14:10:39.102295 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.102299 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.102304 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.102309 | orchestrator | 2025-11-01 14:10:39.102314 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-11-01 14:10:39.102318 | orchestrator | Saturday 01 November 2025 14:07:00 +0000 (0:00:02.180) 0:03:20.538 ***** 2025-11-01 14:10:39.102323 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.102328 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.102332 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.102337 | orchestrator | 2025-11-01 14:10:39.102342 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-11-01 14:10:39.102346 | orchestrator | Saturday 01 November 2025 14:07:01 +0000 (0:00:00.580) 0:03:21.118 ***** 2025-11-01 14:10:39.102351 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.102356 | orchestrator | 2025-11-01 14:10:39.102361 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-11-01 14:10:39.102365 | orchestrator | Saturday 01 November 2025 14:07:02 +0000 (0:00:01.037) 0:03:22.156 ***** 2025-11-01 14:10:39.102403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:10:39.102410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:10:39.102428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:10:39.102477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102502 | orchestrator | 2025-11-01 14:10:39.102507 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-11-01 14:10:39.102512 | orchestrator | Saturday 01 November 2025 14:07:06 +0000 (0:00:04.226) 0:03:26.383 ***** 2025-11-01 14:10:39.102517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 14:10:39.102529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102535 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.102540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 14:10:39.102578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102585 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.102590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 14:10:39.102600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102605 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.102610 | orchestrator | 2025-11-01 14:10:39.102615 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-11-01 14:10:39.102619 | orchestrator | Saturday 01 November 2025 14:07:07 +0000 (0:00:01.044) 0:03:27.427 ***** 2025-11-01 14:10:39.102624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-11-01 14:10:39.102629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-11-01 14:10:39.102634 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.102642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-11-01 14:10:39.102647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-11-01 14:10:39.102652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-11-01 14:10:39.102657 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.102661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-11-01 14:10:39.102666 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.102671 | orchestrator | 2025-11-01 14:10:39.102676 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-11-01 14:10:39.102681 | orchestrator | Saturday 01 November 2025 14:07:08 +0000 (0:00:01.067) 0:03:28.495 ***** 2025-11-01 14:10:39.102685 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.102690 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.102695 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.102699 | orchestrator | 2025-11-01 14:10:39.102712 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-11-01 14:10:39.102717 | orchestrator | Saturday 01 November 2025 14:07:10 +0000 (0:00:01.360) 0:03:29.856 ***** 2025-11-01 14:10:39.102722 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.102726 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.102731 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.102736 | orchestrator | 2025-11-01 14:10:39.102740 | orchestrator | TASK [include_role : manila] *************************************************** 2025-11-01 14:10:39.102745 | orchestrator | Saturday 01 November 2025 14:07:12 +0000 (0:00:02.261) 0:03:32.117 ***** 2025-11-01 14:10:39.102782 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.102792 | orchestrator | 2025-11-01 14:10:39.102797 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-11-01 14:10:39.102802 | orchestrator | Saturday 01 November 2025 14:07:13 +0000 (0:00:01.418) 0:03:33.535 ***** 2025-11-01 14:10:39.102807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-11-01 14:10:39.102813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-11-01 14:10:39.102818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-11-01 14:10:39.102879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102948 
| orchestrator | 2025-11-01 14:10:39.102953 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-11-01 14:10:39.102958 | orchestrator | Saturday 01 November 2025 14:07:17 +0000 (0:00:03.711) 0:03:37.247 ***** 2025-11-01 14:10:39.102963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-11-01 14:10:39.102968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.102986 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.102991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-11-01 14:10:39.103031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.103046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-11-01 14:10:39.103052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.103057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.103064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.103069 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.103074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.103117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.103124 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.103129 | orchestrator | 2025-11-01 14:10:39.103134 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-11-01 14:10:39.103138 | orchestrator | Saturday 01 November 2025 14:07:18 +0000 (0:00:00.913) 0:03:38.161 ***** 2025-11-01 14:10:39.103143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-11-01 14:10:39.103148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-11-01 14:10:39.103153 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.103158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-11-01 14:10:39.103163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-11-01 14:10:39.103168 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.103173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-11-01 14:10:39.103177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-11-01 14:10:39.103189 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.103194 | orchestrator | 2025-11-01 14:10:39.103199 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-11-01 14:10:39.103204 | orchestrator | Saturday 01 November 2025 14:07:19 +0000 (0:00:01.253) 0:03:39.415 ***** 2025-11-01 14:10:39.103209 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.103214 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.103218 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.103223 | orchestrator | 2025-11-01 14:10:39.103228 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-11-01 14:10:39.103233 | orchestrator | Saturday 01 November 2025 14:07:21 +0000 (0:00:01.568) 0:03:40.983 ***** 2025-11-01 14:10:39.103238 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.103242 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.103247 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.103252 | orchestrator | 2025-11-01 14:10:39.103257 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-11-01 14:10:39.103261 | orchestrator | Saturday 01 November 2025 14:07:23 +0000 (0:00:02.159) 0:03:43.142 ***** 2025-11-01 14:10:39.103270 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.103275 | orchestrator | 2025-11-01 14:10:39.103282 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-11-01 14:10:39.103287 | orchestrator | Saturday 01 November 2025 14:07:24 +0000 (0:00:01.375) 0:03:44.518 ***** 2025-11-01 14:10:39.103292 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-01 14:10:39.103297 | orchestrator | 2025-11-01 14:10:39.103302 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-11-01 14:10:39.103307 | orchestrator | Saturday 01 November 2025 14:07:27 +0000 (0:00:03.188) 0:03:47.706 ***** 2025-11-01 14:10:39.103344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:10:39.103352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-01 14:10:39.103357 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.103365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:10:39.103373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-01 14:10:39.103378 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.103415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:10:39.103423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-01 14:10:39.103431 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.103436 | orchestrator | 2025-11-01 14:10:39.103441 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-11-01 14:10:39.103445 | orchestrator | Saturday 01 November 2025 14:07:30 +0000 (0:00:02.222) 0:03:49.929 ***** 2025-11-01 14:10:39.103454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:10:39.103497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-01 14:10:39.103504 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.103509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:10:39.103521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-01 14:10:39.103526 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.103572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:10:39.103580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-01 14:10:39.103585 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.103590 | orchestrator | 2025-11-01 14:10:39.103595 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-11-01 14:10:39.103605 | orchestrator | Saturday 01 November 2025 14:07:32 +0000 (0:00:02.478) 0:03:52.408 ***** 2025-11-01 14:10:39.103611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-01 14:10:39.103618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-01 14:10:39.103624 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.103628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-01 14:10:39.103634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-01 14:10:39.103638 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.103674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-01 14:10:39.103680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-01 14:10:39.103685 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.103690 | orchestrator | 2025-11-01 14:10:39.103707 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-11-01 14:10:39.103712 | orchestrator | Saturday 01 November 2025 14:07:35 +0000 (0:00:03.084) 0:03:55.492 ***** 2025-11-01 14:10:39.103718 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.103722 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.103727 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.103732 | orchestrator | 2025-11-01 14:10:39.103737 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-11-01 14:10:39.103742 | orchestrator | Saturday 01 November 2025 14:07:37 +0000 (0:00:02.026) 0:03:57.519 ***** 2025-11-01 14:10:39.103747 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.103752 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.103757 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.103761 | orchestrator | 2025-11-01 14:10:39.103766 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-11-01 14:10:39.103771 | orchestrator | Saturday 01 November 2025 14:07:39 +0000 (0:00:01.540) 0:03:59.060 ***** 2025-11-01 14:10:39.103776 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.103781 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.103786 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.103791 | orchestrator | 2025-11-01 14:10:39.103796 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-11-01 14:10:39.103801 | orchestrator | Saturday 01 November 2025 14:07:39 +0000 (0:00:00.385) 0:03:59.445 ***** 2025-11-01 14:10:39.103806 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.103810 | orchestrator | 2025-11-01 14:10:39.103815 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-11-01 14:10:39.103820 | orchestrator | Saturday 01 November 2025 14:07:41 +0000 (0:00:01.447) 0:04:00.893 ***** 2025-11-01 14:10:39.103828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-11-01 14:10:39.103839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-11-01 14:10:39.103877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-11-01 14:10:39.103888 | orchestrator | 2025-11-01 14:10:39.103893 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-11-01 14:10:39.103897 | orchestrator | Saturday 01 November 2025 14:07:42 +0000 (0:00:01.432) 0:04:02.326 ***** 2025-11-01 14:10:39.103902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-11-01 14:10:39.103907 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.103912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 
'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-11-01 14:10:39.103917 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.103925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-11-01 14:10:39.103930 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.103934 | orchestrator | 2025-11-01 14:10:39.103939 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-11-01 14:10:39.103944 | orchestrator | Saturday 01 November 2025 14:07:42 +0000 (0:00:00.367) 0:04:02.693 ***** 2025-11-01 14:10:39.103949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-11-01 14:10:39.103955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-11-01 14:10:39.103960 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.103964 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.103999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-11-01 14:10:39.104009 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.104014 | orchestrator | 2025-11-01 14:10:39.104018 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-11-01 14:10:39.104023 | orchestrator | Saturday 01 November 2025 14:07:43 +0000 (0:00:00.778) 0:04:03.471 ***** 2025-11-01 14:10:39.104028 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.104032 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.104037 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.104042 | orchestrator | 2025-11-01 14:10:39.104047 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-11-01 14:10:39.104051 | orchestrator | Saturday 01 November 2025 14:07:44 +0000 (0:00:00.422) 0:04:03.894 ***** 2025-11-01 14:10:39.104056 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.104061 | orchestrator | skipping: [testbed-node-1] 
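The manila, mariadb and memcached entries printed above all describe their load-balancer endpoints with the same small dictionary: an internal listener plus, where the API is published, an external one carrying 'external': True and an 'external_fqdn'. Below is a minimal Python sketch that summarizes such a block, using the manila-api listener values copied from the log; the helper itself is illustrative and not part of kolla-ansible.

# Listener block copied (trimmed) from the manila-api item shown above.
manila_api_haproxy = {
    "manila_api": {
        "enabled": "yes", "mode": "http", "external": False,
        "port": "8786", "listen_port": "8786",
    },
    "manila_api_external": {
        "enabled": "yes", "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "8786", "listen_port": "8786",
    },
}

def summarize_listeners(haproxy_block):
    """Return one human-readable line per enabled listener in a service's 'haproxy' block."""
    lines = []
    for name, conf in haproxy_block.items():
        if conf.get("enabled") not in (True, "yes"):
            continue  # e.g. memcached's single listener above has 'enabled': False
        scope = "external via " + conf["external_fqdn"] if conf.get("external") else "internal VIP"
        lines.append(f"{name}: {conf['mode']} on :{conf['listen_port']} ({scope})")
    return lines

for line in summarize_listeners(manila_api_haproxy):
    print(line)
# manila_api: http on :8786 (internal VIP)
# manila_api_external: http on :8786 (external via api.testbed.osism.xyz)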
2025-11-01 14:10:39.104065 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.104070 | orchestrator | 2025-11-01 14:10:39.104075 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-11-01 14:10:39.104080 | orchestrator | Saturday 01 November 2025 14:07:45 +0000 (0:00:01.223) 0:04:05.117 ***** 2025-11-01 14:10:39.104084 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.104089 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.104094 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.104098 | orchestrator | 2025-11-01 14:10:39.104103 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-11-01 14:10:39.104108 | orchestrator | Saturday 01 November 2025 14:07:45 +0000 (0:00:00.333) 0:04:05.451 ***** 2025-11-01 14:10:39.104113 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.104117 | orchestrator | 2025-11-01 14:10:39.104122 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-11-01 14:10:39.104127 | orchestrator | Saturday 01 November 2025 14:07:46 +0000 (0:00:01.302) 0:04:06.753 ***** 2025-11-01 14:10:39.104132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:10:39.104139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:10:39.104205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-01 14:10:39.104210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 
'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-01 14:10:39.104302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:10:39.104345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:10:39.104411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:10:39.104432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-01 14:10:39.104503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:10:39.104513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-01 14:10:39.104565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:10:39.104578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-01 14:10:39.104595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:10:39.104661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  
2025-11-01 14:10:39.104666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-01 14:10:39.104725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:10:39.104730 | orchestrator | 2025-11-01 14:10:39.104735 | orchestrator | 
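The neutron service definitions echoed in the skipped items above all share the same kolla-ansible shape: container metadata (container_name, image, enabled, group, host_in_groups, volumes, dimensions, healthcheck) plus an optional 'haproxy' mapping whose sub-entries (port, listen_port, external_fqdn, tls_backend, ...) describe the load-balancer listeners a service would expose. The following is a minimal illustrative sketch, not kolla-ansible's actual task logic: it assumes, inferred only from which items report "skipping" in this output, that a service contributes listeners only when both the service and the listener are enabled, and it reuses values copied verbatim from the neutron-tls-proxy item above (note that 'enabled' appears in the log both as booleans and as the string 'no').

    #!/usr/bin/env python3
    # Illustrative sketch only -- NOT kolla-ansible's implementation.
    # Assumption (inferred from the skipped items above): a service contributes
    # HAProxy listeners only if the service itself is enabled and its 'haproxy'
    # mapping contains listeners that are themselves enabled.

    def is_enabled(value):
        """Normalize the mixed bool/str 'enabled' values seen in the log."""
        if isinstance(value, str):
            return value.strip().lower() in ("yes", "true", "1")
        return bool(value)

    def active_listeners(service):
        """Return the HAProxy listener entries a service would contribute."""
        if not is_enabled(service.get("enabled")):
            return {}
        return {
            name: cfg
            for name, cfg in service.get("haproxy", {}).items()
            if is_enabled(cfg.get("enabled"))
        }

    # Values copied from the neutron-tls-proxy item logged above:
    neutron_tls_proxy = {
        "container_name": "neutron_tls_proxy",
        "enabled": "no",
        "haproxy": {
            "neutron_tls_proxy": {
                "enabled": False, "mode": "http", "external": False,
                "port": "9696", "listen_port": "9696", "tls_backend": "yes",
            },
        },
    }

    print(active_listeners(neutron_tls_proxy))  # -> {}

Under these assumptions the sketch prints an empty mapping for neutron-tls-proxy, which is consistent with the task skipping that item on every testbed node; services with no 'haproxy' key at all (for example the ovn-metadata-agent entries) would likewise contribute nothing.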
TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-11-01 14:10:39.104740 | orchestrator | Saturday 01 November 2025 14:07:50 +0000 (0:00:03.806) 0:04:10.560 ***** 2025-11-01 14:10:39.104745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:10:39.104756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104812 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-01 14:10:39.104818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:10:39.104880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.104903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:10:39.104908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-01 14:10:39.104964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.104973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': 
'30'}}})  2025-11-01 14:10:39.104982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:10:39.104987 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.104992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-01 14:10:39.105027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.105039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:10:39.105047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.105054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:10:39.105118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-01 14:10:39.105136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-01 14:10:39.105171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.105188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.105193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.105205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-01 14:10:39.105210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:10:39.105246 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.105251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:10:39.105257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-01 14:10:39.105269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 14:10:39.105274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-01 14:10:39.105301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:10:39.105306 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.105311 | orchestrator | 2025-11-01 14:10:39.105315 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-11-01 14:10:39.105320 | orchestrator | Saturday 01 November 2025 14:07:52 +0000 (0:00:01.410) 0:04:11.970 ***** 2025-11-01 14:10:39.105325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-11-01 14:10:39.105330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-11-01 14:10:39.105335 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.105340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-11-01 14:10:39.105344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-11-01 14:10:39.105355 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.105360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-11-01 14:10:39.105367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-11-01 14:10:39.105372 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.105377 | orchestrator | 2025-11-01 14:10:39.105382 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-11-01 14:10:39.105387 | orchestrator | Saturday 01 November 2025 14:07:53 +0000 (0:00:01.703) 0:04:13.673 ***** 2025-11-01 14:10:39.105391 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.105396 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.105401 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.105406 | orchestrator | 2025-11-01 14:10:39.105410 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-11-01 14:10:39.105415 | orchestrator | Saturday 01 November 2025 14:07:55 +0000 (0:00:01.314) 0:04:14.988 ***** 2025-11-01 14:10:39.105420 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.105425 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.105429 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.105434 | orchestrator | 2025-11-01 14:10:39.105439 | orchestrator | TASK [include_role : placement] ************************************************ 2025-11-01 14:10:39.105443 | orchestrator | Saturday 01 November 2025 14:07:57 +0000 (0:00:01.982) 0:04:16.971 ***** 2025-11-01 14:10:39.105448 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.105453 | orchestrator | 2025-11-01 14:10:39.105458 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-11-01 14:10:39.105464 | orchestrator | Saturday 01 November 2025 14:07:58 +0000 (0:00:01.188) 0:04:18.160 ***** 2025-11-01 14:10:39.105517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.105525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.105530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.105535 | orchestrator | 2025-11-01 14:10:39.105540 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-11-01 14:10:39.105544 | orchestrator | Saturday 01 November 2025 14:08:02 +0000 (0:00:03.678) 0:04:21.838 ***** 2025-11-01 14:10:39.105552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.105561 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.105579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.105585 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.105590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.105595 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.105600 | orchestrator | 2025-11-01 14:10:39.105605 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-11-01 14:10:39.105609 | orchestrator | Saturday 01 November 2025 14:08:02 +0000 (0:00:00.521) 0:04:22.360 ***** 2025-11-01 14:10:39.105614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-01 14:10:39.105619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-01 14:10:39.105624 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.105629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-01 14:10:39.105634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-01 14:10:39.105639 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.105644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-01 14:10:39.105651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-01 14:10:39.105656 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.105665 | orchestrator | 2025-11-01 14:10:39.105670 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-11-01 14:10:39.105675 | orchestrator | Saturday 01 November 2025 14:08:03 +0000 (0:00:00.779) 0:04:23.139 ***** 2025-11-01 14:10:39.105679 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.105684 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.105689 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.105693 | orchestrator | 2025-11-01 14:10:39.105698 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-11-01 14:10:39.105702 | orchestrator | Saturday 01 November 2025 14:08:05 +0000 (0:00:01.998) 0:04:25.138 ***** 2025-11-01 14:10:39.105707 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.105711 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.105716 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.105720 | orchestrator | 2025-11-01 14:10:39.105724 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-11-01 14:10:39.105729 | orchestrator | Saturday 01 November 2025 14:08:07 +0000 (0:00:01.926) 0:04:27.064 ***** 2025-11-01 14:10:39.105733 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.105738 | orchestrator | 2025-11-01 14:10:39.105742 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-11-01 14:10:39.105747 | orchestrator | Saturday 01 November 2025 14:08:08 +0000 (0:00:01.651) 0:04:28.716 ***** 2025-11-01 14:10:39.105764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.105770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.105790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.105819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105835 | orchestrator | 2025-11-01 14:10:39.105840 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-11-01 14:10:39.105845 | orchestrator | Saturday 01 November 2025 14:08:13 +0000 (0:00:04.099) 0:04:32.816 ***** 2025-11-01 14:10:39.105862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.105869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105880 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.105888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.105897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105902 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105907 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.105925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.105931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.105945 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.105950 | orchestrator | 2025-11-01 14:10:39.105955 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-11-01 14:10:39.105960 | orchestrator | Saturday 01 November 2025 14:08:14 +0000 (0:00:00.977) 0:04:33.793 ***** 
2025-11-01 14:10:39.105967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-01 14:10:39.105973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-01 14:10:39.105979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-01 14:10:39.105984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-01 14:10:39.105989 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.105994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-01 14:10:39.105999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-01 14:10:39.106005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-01 14:10:39.106010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-01 14:10:39.106053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-01 14:10:39.106060 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.106065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-01 14:10:39.106071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-01 14:10:39.106076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-01 14:10:39.106082 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.106087 | orchestrator | 2025-11-01 14:10:39.106096 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-11-01 14:10:39.106101 | orchestrator | Saturday 01 November 2025 14:08:14 +0000 (0:00:00.835) 0:04:34.629 ***** 2025-11-01 
14:10:39.106106 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.106111 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.106116 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.106121 | orchestrator | 2025-11-01 14:10:39.106126 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-11-01 14:10:39.106132 | orchestrator | Saturday 01 November 2025 14:08:16 +0000 (0:00:01.321) 0:04:35.951 ***** 2025-11-01 14:10:39.106137 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.106142 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.106147 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.106152 | orchestrator | 2025-11-01 14:10:39.106157 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-11-01 14:10:39.106162 | orchestrator | Saturday 01 November 2025 14:08:18 +0000 (0:00:01.978) 0:04:37.930 ***** 2025-11-01 14:10:39.106167 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.106172 | orchestrator | 2025-11-01 14:10:39.106178 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-11-01 14:10:39.106182 | orchestrator | Saturday 01 November 2025 14:08:19 +0000 (0:00:01.445) 0:04:39.375 ***** 2025-11-01 14:10:39.106187 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-11-01 14:10:39.106192 | orchestrator | 2025-11-01 14:10:39.106196 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-11-01 14:10:39.106201 | orchestrator | Saturday 01 November 2025 14:08:20 +0000 (0:00:00.780) 0:04:40.155 ***** 2025-11-01 14:10:39.106208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-11-01 14:10:39.106213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-11-01 14:10:39.106218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-11-01 14:10:39.106223 | orchestrator | 2025-11-01 14:10:39.106227 | 
orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-11-01 14:10:39.106232 | orchestrator | Saturday 01 November 2025 14:08:24 +0000 (0:00:04.031) 0:04:44.187 ***** 2025-11-01 14:10:39.106248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 14:10:39.106257 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.106262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 14:10:39.106267 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.106272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 14:10:39.106276 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.106281 | orchestrator | 2025-11-01 14:10:39.106285 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-11-01 14:10:39.106290 | orchestrator | Saturday 01 November 2025 14:08:25 +0000 (0:00:00.985) 0:04:45.172 ***** 2025-11-01 14:10:39.106294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-01 14:10:39.106299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-01 14:10:39.106304 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.106309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-01 14:10:39.106316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}})  2025-11-01 14:10:39.106321 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.106326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-01 14:10:39.106331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-01 14:10:39.106335 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.106340 | orchestrator | 2025-11-01 14:10:39.106344 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-11-01 14:10:39.106349 | orchestrator | Saturday 01 November 2025 14:08:26 +0000 (0:00:01.465) 0:04:46.638 ***** 2025-11-01 14:10:39.106353 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.106358 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.106362 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.106372 | orchestrator | 2025-11-01 14:10:39.106376 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-11-01 14:10:39.106381 | orchestrator | Saturday 01 November 2025 14:08:29 +0000 (0:00:02.599) 0:04:49.238 ***** 2025-11-01 14:10:39.106385 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.106390 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.106394 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.106399 | orchestrator | 2025-11-01 14:10:39.106403 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-11-01 14:10:39.106407 | orchestrator | Saturday 01 November 2025 14:08:32 +0000 (0:00:03.182) 0:04:52.421 ***** 2025-11-01 14:10:39.106423 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-11-01 14:10:39.106429 | orchestrator | 2025-11-01 14:10:39.106433 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-11-01 14:10:39.106438 | orchestrator | Saturday 01 November 2025 14:08:34 +0000 (0:00:01.538) 0:04:53.959 ***** 2025-11-01 14:10:39.106443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 14:10:39.106447 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.106452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 14:10:39.106457 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.106461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 14:10:39.106466 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.106471 | orchestrator | 2025-11-01 14:10:39.106475 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-11-01 14:10:39.106480 | orchestrator | Saturday 01 November 2025 14:08:35 +0000 (0:00:01.402) 0:04:55.361 ***** 2025-11-01 14:10:39.106499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 14:10:39.106504 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.106509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 14:10:39.106519 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.106524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 14:10:39.106528 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.106533 | orchestrator | 2025-11-01 14:10:39.106537 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-11-01 14:10:39.106542 | orchestrator | Saturday 01 November 2025 14:08:36 +0000 (0:00:01.324) 0:04:56.686 ***** 2025-11-01 14:10:39.106546 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.106551 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.106555 | orchestrator | skipping: 
[testbed-node-2] 2025-11-01 14:10:39.106560 | orchestrator | 2025-11-01 14:10:39.106576 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-11-01 14:10:39.106582 | orchestrator | Saturday 01 November 2025 14:08:38 +0000 (0:00:01.942) 0:04:58.628 ***** 2025-11-01 14:10:39.106586 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.106591 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.106595 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.106600 | orchestrator | 2025-11-01 14:10:39.106605 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-11-01 14:10:39.106609 | orchestrator | Saturday 01 November 2025 14:08:41 +0000 (0:00:02.438) 0:05:01.066 ***** 2025-11-01 14:10:39.106613 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.106618 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.106622 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.106627 | orchestrator | 2025-11-01 14:10:39.106631 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-11-01 14:10:39.106636 | orchestrator | Saturday 01 November 2025 14:08:44 +0000 (0:00:03.128) 0:05:04.194 ***** 2025-11-01 14:10:39.106640 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-11-01 14:10:39.106645 | orchestrator | 2025-11-01 14:10:39.106649 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-11-01 14:10:39.106654 | orchestrator | Saturday 01 November 2025 14:08:45 +0000 (0:00:00.865) 0:05:05.060 ***** 2025-11-01 14:10:39.106659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-01 14:10:39.106663 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.106668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-01 14:10:39.106676 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.106684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-01 14:10:39.106688 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.106693 | orchestrator | 2025-11-01 14:10:39.106697 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-11-01 14:10:39.106702 | orchestrator | Saturday 01 November 2025 14:08:46 +0000 (0:00:01.265) 0:05:06.326 ***** 2025-11-01 14:10:39.106707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-01 14:10:39.106711 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.106716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-01 14:10:39.106721 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.106737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-01 14:10:39.106742 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.106747 | orchestrator | 2025-11-01 14:10:39.106751 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-11-01 14:10:39.106756 | orchestrator | Saturday 01 November 2025 14:08:47 +0000 (0:00:01.190) 0:05:07.516 ***** 2025-11-01 14:10:39.106760 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.106765 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.106769 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.106774 | orchestrator | 2025-11-01 14:10:39.106778 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-11-01 14:10:39.106783 | orchestrator | Saturday 01 November 2025 14:08:49 +0000 (0:00:01.363) 0:05:08.880 ***** 2025-11-01 14:10:39.106787 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.106791 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.106796 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.106800 | orchestrator | 2025-11-01 14:10:39.106805 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-11-01 14:10:39.106813 | orchestrator | 
Saturday 01 November 2025 14:08:51 +0000 (0:00:02.236) 0:05:11.116 ***** 2025-11-01 14:10:39.106818 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.106822 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.106827 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.106831 | orchestrator | 2025-11-01 14:10:39.106836 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-11-01 14:10:39.106840 | orchestrator | Saturday 01 November 2025 14:08:54 +0000 (0:00:03.013) 0:05:14.130 ***** 2025-11-01 14:10:39.106845 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.106849 | orchestrator | 2025-11-01 14:10:39.106854 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-11-01 14:10:39.106858 | orchestrator | Saturday 01 November 2025 14:08:55 +0000 (0:00:01.453) 0:05:15.584 ***** 2025-11-01 14:10:39.106865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.106870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 14:10:39.106875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.106891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.106897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.106905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.106910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 14:10:39.106914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.106919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.106935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.106941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.106948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 14:10:39.106953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.106995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.107006 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.107011 | orchestrator | 2025-11-01 14:10:39.107016 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-11-01 14:10:39.107020 | orchestrator | Saturday 01 November 2025 14:08:59 +0000 (0:00:03.228) 0:05:18.812 ***** 2025-11-01 14:10:39.107039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.107049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 14:10:39.107054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.107058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.107065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.107070 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.107075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.107091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 14:10:39.107097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.107105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 
'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.107109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 14:10:39.107117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.107122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:10:39.107126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.107131 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.107148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 
14:10:39.107157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:10:39.107162 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.107166 | orchestrator | 2025-11-01 14:10:39.107171 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-11-01 14:10:39.107175 | orchestrator | Saturday 01 November 2025 14:08:59 +0000 (0:00:00.715) 0:05:19.527 ***** 2025-11-01 14:10:39.107180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-01 14:10:39.107185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-01 14:10:39.107190 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.107194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-01 14:10:39.107199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-01 14:10:39.107203 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.107208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-01 14:10:39.107215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-01 14:10:39.107220 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.107224 | orchestrator | 2025-11-01 14:10:39.107229 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-11-01 14:10:39.107233 | orchestrator | Saturday 01 November 2025 14:09:01 +0000 (0:00:01.319) 0:05:20.847 ***** 2025-11-01 14:10:39.107238 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.107242 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.107247 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.107251 | orchestrator | 2025-11-01 14:10:39.107256 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-11-01 14:10:39.107260 | orchestrator | Saturday 01 November 2025 14:09:02 +0000 (0:00:01.399) 0:05:22.246 ***** 2025-11-01 14:10:39.107265 | orchestrator | changed: [testbed-node-0] 
2025-11-01 14:10:39.107269 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.107273 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.107282 | orchestrator | 2025-11-01 14:10:39.107286 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-11-01 14:10:39.107291 | orchestrator | Saturday 01 November 2025 14:09:04 +0000 (0:00:02.207) 0:05:24.453 ***** 2025-11-01 14:10:39.107295 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.107299 | orchestrator | 2025-11-01 14:10:39.107304 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-11-01 14:10:39.107308 | orchestrator | Saturday 01 November 2025 14:09:06 +0000 (0:00:01.533) 0:05:25.987 ***** 2025-11-01 14:10:39.107325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:10:39.107331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:10:39.107336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:10:39.107344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:10:39.107365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:10:39.107371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:10:39.107376 | orchestrator | 2025-11-01 14:10:39.107381 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-11-01 14:10:39.107385 | orchestrator | Saturday 01 November 2025 14:09:12 +0000 (0:00:05.970) 0:05:31.957 ***** 2025-11-01 14:10:39.107390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 14:10:39.107398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 14:10:39.107406 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.107411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 14:10:39.107427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 14:10:39.107433 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.107438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 14:10:39.107445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 14:10:39.107455 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.107460 | orchestrator | 2025-11-01 14:10:39.107464 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-11-01 14:10:39.107469 | orchestrator | Saturday 01 November 2025 14:09:12 +0000 (0:00:00.702) 0:05:32.660 ***** 2025-11-01 14:10:39.107473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-11-01 14:10:39.107478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-01 14:10:39.107495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-01 14:10:39.107500 | orchestrator | 
skipping: [testbed-node-0] 2025-11-01 14:10:39.107504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-11-01 14:10:39.107521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-01 14:10:39.107526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-01 14:10:39.107531 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.107536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-11-01 14:10:39.107540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-01 14:10:39.107545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-01 14:10:39.107550 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.107554 | orchestrator | 2025-11-01 14:10:39.107559 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-11-01 14:10:39.107564 | orchestrator | Saturday 01 November 2025 14:09:13 +0000 (0:00:01.024) 0:05:33.684 ***** 2025-11-01 14:10:39.107568 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.107573 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.107577 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.107582 | orchestrator | 2025-11-01 14:10:39.107586 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-11-01 14:10:39.107591 | orchestrator | Saturday 01 November 2025 14:09:14 +0000 (0:00:00.846) 0:05:34.530 ***** 2025-11-01 14:10:39.107595 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.107600 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.107608 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.107612 | orchestrator | 2025-11-01 14:10:39.107617 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-11-01 14:10:39.107621 | orchestrator | Saturday 01 November 2025 14:09:16 +0000 (0:00:01.276) 0:05:35.807 ***** 2025-11-01 14:10:39.107626 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.107630 | orchestrator | 2025-11-01 14:10:39.107635 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-11-01 14:10:39.107639 | orchestrator | Saturday 01 November 2025 14:09:17 +0000 (0:00:01.370) 0:05:37.178 ***** 2025-11-01 14:10:39.107647 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-01 14:10:39.107652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:10:39.107657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:10:39.107684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-01 14:10:39.107693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:10:39.107700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:10:39.107727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-01 14:10:39.107732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:10:39.107737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:10:39.107758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-01 14:10:39.107765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-01 14:10:39.107770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 14:10:39.107790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-01 14:10:39.107796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 
45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-01 14:10:39.107803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 14:10:39.107821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-01 14:10:39.107829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-01 14:10:39.107834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 14:10:39.107850 | orchestrator | 2025-11-01 14:10:39.107855 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-11-01 14:10:39.107860 | orchestrator | Saturday 01 November 2025 14:09:21 +0000 (0:00:04.228) 0:05:41.406 ***** 2025-11-01 14:10:39.107864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-01 14:10:39.107873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:10:39.107877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:10:39.107897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-01 14:10:39.107902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-01 14:10:39.107910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-01 14:10:39.107926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 14:10:39.107931 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.107936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:10:39.107943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:10:39.107961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-01 14:10:39.107969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-01 14:10:39.107974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.107988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 14:10:39.107993 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.107998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-01 14:10:39.108003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:10:39.108007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.108014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.108019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:10:39.108026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-01 14:10:39.108036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-01 14:10:39.108041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.108046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:10:39.108053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 14:10:39.108058 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108062 | orchestrator | 2025-11-01 14:10:39.108067 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-11-01 14:10:39.108071 | orchestrator | Saturday 01 November 2025 14:09:22 +0000 (0:00:01.203) 0:05:42.610 ***** 2025-11-01 14:10:39.108076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-11-01 14:10:39.108081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-11-01 14:10:39.108086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-01 14:10:39.108094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-01 14:10:39.108099 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.108104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-11-01 14:10:39.108111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-11-01 14:10:39.108116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-01 14:10:39.108120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-01 14:10:39.108125 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-11-01 14:10:39.108134 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-11-01 14:10:39.108139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-01 14:10:39.108144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-11-01 14:10:39.108148 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108153 | orchestrator | 2025-11-01 14:10:39.108157 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-11-01 14:10:39.108162 | orchestrator | Saturday 01 November 2025 14:09:23 +0000 (0:00:00.947) 0:05:43.557 ***** 2025-11-01 14:10:39.108166 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.108171 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108176 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108180 | orchestrator | 2025-11-01 14:10:39.108185 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-11-01 14:10:39.108189 | orchestrator | Saturday 01 November 2025 14:09:24 +0000 (0:00:00.400) 0:05:43.958 ***** 2025-11-01 14:10:39.108194 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.108201 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108205 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108210 | orchestrator | 2025-11-01 14:10:39.108214 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-11-01 14:10:39.108221 | orchestrator | Saturday 01 November 2025 14:09:25 +0000 (0:00:01.327) 0:05:45.285 ***** 2025-11-01 14:10:39.108229 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.108233 | orchestrator | 2025-11-01 14:10:39.108238 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-11-01 14:10:39.108242 | orchestrator | Saturday 01 November 2025 14:09:27 +0000 (0:00:01.694) 0:05:46.980 ***** 2025-11-01 14:10:39.108247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 14:10:39.108254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 14:10:39.108260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 14:10:39.108265 | orchestrator | 2025-11-01 14:10:39.108269 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-11-01 14:10:39.108274 | orchestrator | Saturday 01 November 2025 14:09:29 +0000 (0:00:02.446) 0:05:49.427 ***** 2025-11-01 14:10:39.108281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-11-01 14:10:39.108289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-11-01 14:10:39.108295 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.108299 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-11-01 14:10:39.108311 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108316 | orchestrator | 2025-11-01 14:10:39.108320 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-11-01 14:10:39.108325 | orchestrator | Saturday 01 November 2025 14:09:30 +0000 (0:00:00.448) 0:05:49.875 ***** 2025-11-01 14:10:39.108329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-11-01 14:10:39.108334 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.108339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-11-01 14:10:39.108343 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-11-01 14:10:39.108352 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108357 | orchestrator | 2025-11-01 14:10:39.108361 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-11-01 14:10:39.108366 | orchestrator | Saturday 01 November 2025 14:09:31 +0000 (0:00:01.141) 0:05:51.017 ***** 2025-11-01 14:10:39.108370 | orchestrator | 
skipping: [testbed-node-0] 2025-11-01 14:10:39.108378 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108382 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108387 | orchestrator | 2025-11-01 14:10:39.108391 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-11-01 14:10:39.108396 | orchestrator | Saturday 01 November 2025 14:09:31 +0000 (0:00:00.491) 0:05:51.509 ***** 2025-11-01 14:10:39.108400 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.108405 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108409 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108414 | orchestrator | 2025-11-01 14:10:39.108418 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-11-01 14:10:39.108423 | orchestrator | Saturday 01 November 2025 14:09:33 +0000 (0:00:01.431) 0:05:52.940 ***** 2025-11-01 14:10:39.108427 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:10:39.108432 | orchestrator | 2025-11-01 14:10:39.108439 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-11-01 14:10:39.108443 | orchestrator | Saturday 01 November 2025 14:09:35 +0000 (0:00:01.909) 0:05:54.849 ***** 2025-11-01 14:10:39.108448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.108455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.108460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 
'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.108466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.108476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-11-01 14:10:39.108492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-11-01 
14:10:39.108497 | orchestrator | 2025-11-01 14:10:39.108504 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-11-01 14:10:39.108509 | orchestrator | Saturday 01 November 2025 14:09:41 +0000 (0:00:06.464) 0:06:01.314 ***** 2025-11-01 14:10:39.108514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.108518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.108528 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.108535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.108540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.108545 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.108557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-11-01 14:10:39.108565 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108570 | orchestrator | 2025-11-01 14:10:39.108574 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-11-01 14:10:39.108579 | orchestrator | Saturday 01 November 2025 14:09:42 +0000 (0:00:00.680) 0:06:01.994 ***** 2025-11-01 14:10:39.108583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-01 14:10:39.108588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-01 
14:10:39.108595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-01 14:10:39.108600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-01 14:10:39.108605 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.108609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-01 14:10:39.108614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-01 14:10:39.108618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-01 14:10:39.108623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-01 14:10:39.108628 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-01 14:10:39.108639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-11-01 14:10:39.108643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-01 14:10:39.108648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-11-01 14:10:39.108656 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108661 | orchestrator | 2025-11-01 14:10:39.108665 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-11-01 14:10:39.108670 | orchestrator | Saturday 01 November 2025 14:09:44 +0000 (0:00:01.794) 0:06:03.789 ***** 2025-11-01 14:10:39.108674 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.108679 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.108683 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.108688 | orchestrator | 2025-11-01 14:10:39.108692 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-11-01 14:10:39.108697 | orchestrator | Saturday 01 November 2025 14:09:45 +0000 (0:00:01.413) 
0:06:05.203 ***** 2025-11-01 14:10:39.108701 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.108705 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.108710 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.108715 | orchestrator | 2025-11-01 14:10:39.108719 | orchestrator | TASK [include_role : swift] **************************************************** 2025-11-01 14:10:39.108724 | orchestrator | Saturday 01 November 2025 14:09:47 +0000 (0:00:02.451) 0:06:07.655 ***** 2025-11-01 14:10:39.108728 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.108733 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108737 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108742 | orchestrator | 2025-11-01 14:10:39.108746 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-11-01 14:10:39.108751 | orchestrator | Saturday 01 November 2025 14:09:48 +0000 (0:00:00.374) 0:06:08.030 ***** 2025-11-01 14:10:39.108755 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.108760 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108764 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108768 | orchestrator | 2025-11-01 14:10:39.108773 | orchestrator | TASK [include_role : trove] **************************************************** 2025-11-01 14:10:39.108777 | orchestrator | Saturday 01 November 2025 14:09:48 +0000 (0:00:00.332) 0:06:08.362 ***** 2025-11-01 14:10:39.108782 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.108786 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108791 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108795 | orchestrator | 2025-11-01 14:10:39.108800 | orchestrator | TASK [include_role : venus] **************************************************** 2025-11-01 14:10:39.108804 | orchestrator | Saturday 01 November 2025 14:09:49 +0000 (0:00:00.738) 0:06:09.101 ***** 2025-11-01 14:10:39.108809 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.108813 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108818 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108822 | orchestrator | 2025-11-01 14:10:39.108826 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-11-01 14:10:39.108831 | orchestrator | Saturday 01 November 2025 14:09:49 +0000 (0:00:00.406) 0:06:09.507 ***** 2025-11-01 14:10:39.108835 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.108842 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108847 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108851 | orchestrator | 2025-11-01 14:10:39.108856 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-11-01 14:10:39.108860 | orchestrator | Saturday 01 November 2025 14:09:50 +0000 (0:00:00.450) 0:06:09.957 ***** 2025-11-01 14:10:39.108865 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.108869 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.108874 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.108878 | orchestrator | 2025-11-01 14:10:39.108883 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-11-01 14:10:39.108887 | orchestrator | Saturday 01 November 2025 14:09:51 +0000 (0:00:01.070) 0:06:11.028 ***** 2025-11-01 14:10:39.108892 | orchestrator | 
ok: [testbed-node-1] 2025-11-01 14:10:39.108899 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.108904 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.108908 | orchestrator | 2025-11-01 14:10:39.108913 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-11-01 14:10:39.108917 | orchestrator | Saturday 01 November 2025 14:09:51 +0000 (0:00:00.729) 0:06:11.757 ***** 2025-11-01 14:10:39.108921 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.108926 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.108931 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.108935 | orchestrator | 2025-11-01 14:10:39.108940 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-11-01 14:10:39.108944 | orchestrator | Saturday 01 November 2025 14:09:52 +0000 (0:00:00.397) 0:06:12.155 ***** 2025-11-01 14:10:39.108949 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.108953 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.108958 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.108962 | orchestrator | 2025-11-01 14:10:39.108966 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-11-01 14:10:39.108971 | orchestrator | Saturday 01 November 2025 14:09:53 +0000 (0:00:00.929) 0:06:13.084 ***** 2025-11-01 14:10:39.108975 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.108980 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.108984 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.108989 | orchestrator | 2025-11-01 14:10:39.108993 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-11-01 14:10:39.108998 | orchestrator | Saturday 01 November 2025 14:09:54 +0000 (0:00:01.276) 0:06:14.360 ***** 2025-11-01 14:10:39.109002 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.109007 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.109013 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.109018 | orchestrator | 2025-11-01 14:10:39.109022 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-11-01 14:10:39.109027 | orchestrator | Saturday 01 November 2025 14:09:55 +0000 (0:00:01.044) 0:06:15.404 ***** 2025-11-01 14:10:39.109031 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.109036 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.109040 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.109045 | orchestrator | 2025-11-01 14:10:39.109049 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-11-01 14:10:39.109054 | orchestrator | Saturday 01 November 2025 14:10:05 +0000 (0:00:09.933) 0:06:25.338 ***** 2025-11-01 14:10:39.109058 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.109063 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.109067 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.109072 | orchestrator | 2025-11-01 14:10:39.109076 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-11-01 14:10:39.109081 | orchestrator | Saturday 01 November 2025 14:10:06 +0000 (0:00:00.851) 0:06:26.190 ***** 2025-11-01 14:10:39.109085 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.109090 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.109094 | orchestrator | changed: [testbed-node-1] 
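The handlers around this point restart the standby load balancer stack in a deliberate order: the backup keepalived, haproxy and proxysql containers are stopped first, haproxy and proxysql are started again and probed before keepalived is brought back, and the corresponding master-side handlers below are skipped because nothing on the active side needs to change. A minimal sketch of that ordering in Python, assuming plain docker CLI access and hypothetical health-check ports; this is not kolla-ansible's actual handler code:

import socket
import subprocess
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0) -> None:
    # Poll a TCP port until it accepts connections or the timeout expires.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return
        except OSError:
            time.sleep(1)
    raise TimeoutError(f"{host}:{port} did not come up within {timeout}s")

def restart_backup_stack(api_addr: str) -> None:
    # Stop the backup instances first; the master node keeps serving the VIP meanwhile.
    for name in ("keepalived", "haproxy", "proxysql"):
        subprocess.run(["docker", "stop", name], check=False)
    # Bring the proxies back and confirm they listen before re-enabling keepalived,
    # so the VIP is only advertised once the node can actually serve traffic.
    subprocess.run(["docker", "start", "haproxy"], check=True)
    wait_for_port(api_addr, 61313)  # hypothetical haproxy monitor port
    subprocess.run(["docker", "start", "proxysql"], check=True)
    wait_for_port(api_addr, 6032)   # assumed ProxySQL admin port
    subprocess.run(["docker", "start", "keepalived"], check=True)

restart_backup_stack("192.168.16.10")  # example internal API address seen in this log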
2025-11-01 14:10:39.109099 | orchestrator | 2025-11-01 14:10:39.109103 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-11-01 14:10:39.109108 | orchestrator | Saturday 01 November 2025 14:10:16 +0000 (0:00:10.156) 0:06:36.346 ***** 2025-11-01 14:10:39.109112 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.109117 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.109121 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.109126 | orchestrator | 2025-11-01 14:10:39.109130 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-11-01 14:10:39.109134 | orchestrator | Saturday 01 November 2025 14:10:21 +0000 (0:00:05.128) 0:06:41.474 ***** 2025-11-01 14:10:39.109139 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:10:39.109143 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:10:39.109148 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:10:39.109155 | orchestrator | 2025-11-01 14:10:39.109160 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-11-01 14:10:39.109165 | orchestrator | Saturday 01 November 2025 14:10:26 +0000 (0:00:04.677) 0:06:46.152 ***** 2025-11-01 14:10:39.109169 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.109173 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.109178 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.109182 | orchestrator | 2025-11-01 14:10:39.109187 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-11-01 14:10:39.109191 | orchestrator | Saturday 01 November 2025 14:10:26 +0000 (0:00:00.429) 0:06:46.581 ***** 2025-11-01 14:10:39.109196 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.109200 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.109205 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.109209 | orchestrator | 2025-11-01 14:10:39.109214 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-11-01 14:10:39.109218 | orchestrator | Saturday 01 November 2025 14:10:27 +0000 (0:00:00.404) 0:06:46.986 ***** 2025-11-01 14:10:39.109223 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.109227 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.109232 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.109236 | orchestrator | 2025-11-01 14:10:39.109241 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-11-01 14:10:39.109245 | orchestrator | Saturday 01 November 2025 14:10:27 +0000 (0:00:00.744) 0:06:47.731 ***** 2025-11-01 14:10:39.109250 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.109254 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.109259 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.109263 | orchestrator | 2025-11-01 14:10:39.109270 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-11-01 14:10:39.109275 | orchestrator | Saturday 01 November 2025 14:10:28 +0000 (0:00:00.372) 0:06:48.104 ***** 2025-11-01 14:10:39.109279 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.109284 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.109288 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.109293 | orchestrator | 2025-11-01 14:10:39.109297 | 
orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-11-01 14:10:39.109302 | orchestrator | Saturday 01 November 2025 14:10:28 +0000 (0:00:00.413) 0:06:48.518 ***** 2025-11-01 14:10:39.109306 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:10:39.109311 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:10:39.109315 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:10:39.109320 | orchestrator | 2025-11-01 14:10:39.109324 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-11-01 14:10:39.109329 | orchestrator | Saturday 01 November 2025 14:10:29 +0000 (0:00:00.435) 0:06:48.954 ***** 2025-11-01 14:10:39.109333 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.109338 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.109342 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.109347 | orchestrator | 2025-11-01 14:10:39.109351 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-11-01 14:10:39.109356 | orchestrator | Saturday 01 November 2025 14:10:34 +0000 (0:00:05.534) 0:06:54.489 ***** 2025-11-01 14:10:39.109360 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:10:39.109365 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:10:39.109369 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:10:39.109374 | orchestrator | 2025-11-01 14:10:39.109378 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:10:39.109383 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-11-01 14:10:39.109388 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-11-01 14:10:39.109395 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-11-01 14:10:39.109400 | orchestrator | 2025-11-01 14:10:39.109404 | orchestrator | 2025-11-01 14:10:39.109411 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:10:39.109415 | orchestrator | Saturday 01 November 2025 14:10:35 +0000 (0:00:00.939) 0:06:55.428 ***** 2025-11-01 14:10:39.109420 | orchestrator | =============================================================================== 2025-11-01 14:10:39.109424 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.16s 2025-11-01 14:10:39.109429 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.93s 2025-11-01 14:10:39.109433 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.46s 2025-11-01 14:10:39.109438 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.97s 2025-11-01 14:10:39.109442 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.53s 2025-11-01 14:10:39.109447 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 5.46s 2025-11-01 14:10:39.109451 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 5.13s 2025-11-01 14:10:39.109456 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.01s 2025-11-01 14:10:39.109460 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.96s 2025-11-01 
14:10:39.109464 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.84s 2025-11-01 14:10:39.109469 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.74s 2025-11-01 14:10:39.109473 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.68s 2025-11-01 14:10:39.109478 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.63s 2025-11-01 14:10:39.109514 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.57s 2025-11-01 14:10:39.109519 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.39s 2025-11-01 14:10:39.109523 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.31s 2025-11-01 14:10:39.109528 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.26s 2025-11-01 14:10:39.109532 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.23s 2025-11-01 14:10:39.109537 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.23s 2025-11-01 14:10:39.109541 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.14s 2025-11-01 14:10:39.109546 | orchestrator | 2025-11-01 14:10:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:42.145454 | orchestrator | 2025-11-01 14:10:42 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:10:42.147447 | orchestrator | 2025-11-01 14:10:42 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:42.149133 | orchestrator | 2025-11-01 14:10:42 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:10:42.149695 | orchestrator | 2025-11-01 14:10:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:45.182954 | orchestrator | 2025-11-01 14:10:45 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:10:45.184244 | orchestrator | 2025-11-01 14:10:45 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:45.185647 | orchestrator | 2025-11-01 14:10:45 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:10:45.185661 | orchestrator | 2025-11-01 14:10:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:48.223583 | orchestrator | 2025-11-01 14:10:48 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:10:48.223923 | orchestrator | 2025-11-01 14:10:48 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:48.224805 | orchestrator | 2025-11-01 14:10:48 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:10:48.225005 | orchestrator | 2025-11-01 14:10:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:10:51.273873 | orchestrator | 2025-11-01 14:10:51 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:10:51.276222 | orchestrator | 2025-11-01 14:10:51 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:10:51.278623 | orchestrator | 2025-11-01 14:10:51 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:10:51.278656 | orchestrator | 2025-11-01 14:10:51 | INFO  | Wait 1 second(s) until the next check 
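From here on the orchestrator only polls the three OSISM task IDs until they leave the STARTED state; the roughly three-second spacing of the log lines suggests the state queries themselves add a couple of seconds on top of the advertised one-second wait. A minimal sketch of such a poll-until-done loop in Python, assuming a hypothetical get_task_state(task_id) callable that returns Celery-style state strings; the real osism client queries task state through its own API:

import time
from typing import Callable, Dict, Iterable

TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}  # Celery-style end states

def wait_for_tasks(task_ids: Iterable[str],
                   get_task_state: Callable[[str], str],
                   interval: float = 1.0) -> Dict[str, str]:
    # Poll every task until all of them reach a terminal state; return the final states.
    states: Dict[str, str] = {task_id: "PENDING" for task_id in task_ids}
    while any(state not in TERMINAL_STATES for state in states.values()):
        for task_id in states:
            states[task_id] = get_task_state(task_id)  # e.g. "STARTED", "SUCCESS", ...
            print(f"Task {task_id} is in state {states[task_id]}")
        if any(state not in TERMINAL_STATES for state in states.values()):
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states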
2025-11-01 14:10:54 through 2025-11-01 14:12:35 | orchestrator | Tasks f3bfde81-bfdc-4d18-8232-cc4fb407d910, 827d0b60-b849-4d8d-82b2-345aefa66109 and 80d45b28-759c-482c-a2aa-7c317f29651c remain in state STARTED; the same three status lines and "Wait 1 second(s) until the next check" repeat roughly every 3 seconds throughout this interval. 2025-11-01 14:12:38.113541 | orchestrator | 2025-11-01 14:12:38 | INFO  | Task 
f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:12:38.115621 | orchestrator | 2025-11-01 14:12:38 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:12:38.119029 | orchestrator | 2025-11-01 14:12:38 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:12:38.119672 | orchestrator | 2025-11-01 14:12:38 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:12:41.169687 | orchestrator | 2025-11-01 14:12:41 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:12:41.171307 | orchestrator | 2025-11-01 14:12:41 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:12:41.175020 | orchestrator | 2025-11-01 14:12:41 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:12:41.175193 | orchestrator | 2025-11-01 14:12:41 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:12:44.225629 | orchestrator | 2025-11-01 14:12:44 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:12:44.226598 | orchestrator | 2025-11-01 14:12:44 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:12:44.228907 | orchestrator | 2025-11-01 14:12:44 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:12:44.228929 | orchestrator | 2025-11-01 14:12:44 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:12:47.294940 | orchestrator | 2025-11-01 14:12:47 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:12:47.297349 | orchestrator | 2025-11-01 14:12:47 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:12:47.300305 | orchestrator | 2025-11-01 14:12:47 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:12:47.300607 | orchestrator | 2025-11-01 14:12:47 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:12:50.348025 | orchestrator | 2025-11-01 14:12:50 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:12:50.348959 | orchestrator | 2025-11-01 14:12:50 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:12:50.350991 | orchestrator | 2025-11-01 14:12:50 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:12:50.351204 | orchestrator | 2025-11-01 14:12:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:12:53.397047 | orchestrator | 2025-11-01 14:12:53 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:12:53.399464 | orchestrator | 2025-11-01 14:12:53 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:12:53.401819 | orchestrator | 2025-11-01 14:12:53 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:12:53.402257 | orchestrator | 2025-11-01 14:12:53 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:12:56.455557 | orchestrator | 2025-11-01 14:12:56 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:12:56.456706 | orchestrator | 2025-11-01 14:12:56 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED 2025-11-01 14:12:56.458110 | orchestrator | 2025-11-01 14:12:56 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:12:56.458136 | orchestrator | 2025-11-01 14:12:56 | INFO  | Wait 1 second(s) until the next 
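The status lines above and below come from a client-side wait loop: the deploy tooling repeatedly asks the API for the state of each submitted task, reports it, sleeps briefly, and stops once every task reaches a terminal state such as SUCCESS. The following is only a minimal sketch of that pattern in Python, not the actual OSISM client code; wait_for_tasks and the get_task_state callback are hypothetical names introduced for illustration.

    import time

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(task_ids, get_task_state, poll_interval=1):
        # Poll every pending task until each one reaches a terminal state.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # e.g. "STARTED", "SUCCESS"
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {poll_interval} second(s) until the next check")
                time.sleep(poll_interval)

In the log the checks land roughly three seconds apart even though the message says one second, which is consistent with the sleep interval plus the round-trip time of the three status queries.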
2025-11-01 14:12:59.500340 | orchestrator | 2025-11-01 14:12:59 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED
2025-11-01 14:12:59.502121 | orchestrator | 2025-11-01 14:12:59 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state STARTED
2025-11-01 14:12:59.503864 | orchestrator | 2025-11-01 14:12:59 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED
2025-11-01 14:12:59.503890 | orchestrator | 2025-11-01 14:12:59 | INFO  | Wait 1 second(s) until the next check
2025-11-01 14:13:02.548837 | orchestrator | 2025-11-01 14:13:02 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED
2025-11-01 14:13:02.554870 | orchestrator | 2025-11-01 14:13:02 | INFO  | Task 827d0b60-b849-4d8d-82b2-345aefa66109 is in state SUCCESS
2025-11-01 14:13:02.557261 | orchestrator |
2025-11-01 14:13:02.557302 | orchestrator |
2025-11-01 14:13:02.557314 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-11-01 14:13:02.557324 | orchestrator |
2025-11-01 14:13:02.557334 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-11-01 14:13:02.557344 | orchestrator | Saturday 01 November 2025 14:01:04 +0000 (0:00:00.910) 0:00:00.910 *****
2025-11-01 14:13:02.557355 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 14:13:02.557366 | orchestrator |
2025-11-01 14:13:02.557376 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-11-01 14:13:02.557386 | orchestrator | Saturday 01 November 2025 14:01:05 +0000 (0:00:01.519) 0:00:02.430 *****
2025-11-01 14:13:02.557396 | orchestrator | ok: [testbed-node-4]
2025-11-01 14:13:02.557443 | orchestrator | ok: [testbed-node-3]
2025-11-01 14:13:02.557453 | orchestrator | ok: [testbed-node-5]
2025-11-01 14:13:02.557475 | orchestrator | ok: [testbed-node-0]
2025-11-01 14:13:02.557541 | orchestrator | ok: [testbed-node-1]
2025-11-01 14:13:02.557564 | orchestrator | ok: [testbed-node-2]
2025-11-01 14:13:02.557574 | orchestrator |
2025-11-01 14:13:02.557716 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-11-01 14:13:02.557729 | orchestrator | Saturday 01 November 2025 14:01:07 +0000 (0:00:01.892) 0:00:04.322 *****
2025-11-01 14:13:02.557738 | orchestrator | ok: [testbed-node-3]
2025-11-01 14:13:02.557759 | orchestrator | ok: [testbed-node-4]
2025-11-01 14:13:02.557770 | orchestrator | ok: [testbed-node-5]
2025-11-01 14:13:02.557779 | orchestrator | ok: [testbed-node-0]
2025-11-01 14:13:02.557789 | orchestrator | ok: [testbed-node-1]
2025-11-01 14:13:02.557798 | orchestrator | ok: [testbed-node-2]
2025-11-01 14:13:02.557808 | orchestrator |
2025-11-01 14:13:02.557817 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-11-01 14:13:02.557827 | orchestrator | Saturday 01 November 2025 14:01:08 +0000 (0:00:01.373) 0:00:05.695 *****
2025-11-01 14:13:02.557836 | orchestrator | ok: [testbed-node-3]
2025-11-01 14:13:02.557846 | orchestrator | ok: [testbed-node-4]
2025-11-01 14:13:02.557858 | orchestrator | ok: [testbed-node-5]
2025-11-01 14:13:02.557869 | orchestrator | ok: [testbed-node-0]
2025-11-01 14:13:02.557879 | orchestrator | ok: [testbed-node-1]
2025-11-01 14:13:02.557890 | orchestrator | ok: [testbed-node-2]
2025-11-01 14:13:02.557901 |
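The play begins by probing each node for a container runtime: the "Check if podman binary is present" task above feeds the container_binary fact set just below, and the same binary is later used to look for a running monitor container with "ps -q --filter name=ceph-mon-<hostname>". The sketch below only illustrates that kind of detection locally in Python under those assumptions; it is not the ceph-facts role itself, and both function names are invented for the example.

    import shutil
    import subprocess

    def detect_container_binary():
        # Prefer podman when it is on PATH, otherwise fall back to docker
        # (this run ends up using docker, as the later task output shows).
        return "podman" if shutil.which("podman") else "docker"

    def find_running_mon_container(container_binary, hostname):
        # List container IDs whose name matches ceph-mon-<hostname>;
        # an empty result means no monitor container is running there.
        result = subprocess.run(
            [container_binary, "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
            capture_output=True,
            text=True,
            check=False,
        )
        return result.stdout.strip()

A non-empty return value corresponds to the "Find a running mon container" results further down, which then determine container_exec_cmd and whether an existing cluster fsid can be read back.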
orchestrator | 2025-11-01 14:13:02.557911 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-11-01 14:13:02.557922 | orchestrator | Saturday 01 November 2025 14:01:09 +0000 (0:00:00.942) 0:00:06.638 ***** 2025-11-01 14:13:02.557933 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.557944 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.557955 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.557965 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.557976 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.557986 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.557997 | orchestrator | 2025-11-01 14:13:02.558008 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-11-01 14:13:02.558114 | orchestrator | Saturday 01 November 2025 14:01:10 +0000 (0:00:00.716) 0:00:07.355 ***** 2025-11-01 14:13:02.558207 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.558230 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.558274 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.558284 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.558342 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.558354 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.558364 | orchestrator | 2025-11-01 14:13:02.558374 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-11-01 14:13:02.558494 | orchestrator | Saturday 01 November 2025 14:01:11 +0000 (0:00:00.865) 0:00:08.221 ***** 2025-11-01 14:13:02.558531 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.558540 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.558550 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.558559 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.558569 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.558578 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.558588 | orchestrator | 2025-11-01 14:13:02.558597 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-11-01 14:13:02.558607 | orchestrator | Saturday 01 November 2025 14:01:12 +0000 (0:00:00.801) 0:00:09.022 ***** 2025-11-01 14:13:02.558617 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.558628 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.558637 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.558647 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.558656 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.558665 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.558675 | orchestrator | 2025-11-01 14:13:02.558684 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-11-01 14:13:02.558705 | orchestrator | Saturday 01 November 2025 14:01:12 +0000 (0:00:00.705) 0:00:09.728 ***** 2025-11-01 14:13:02.558715 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.558725 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.558734 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.558744 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.558754 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.558763 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.558772 | orchestrator | 2025-11-01 14:13:02.558782 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] 
************ 2025-11-01 14:13:02.558792 | orchestrator | Saturday 01 November 2025 14:01:13 +0000 (0:00:00.868) 0:00:10.596 ***** 2025-11-01 14:13:02.558801 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-01 14:13:02.558811 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 14:13:02.558934 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 14:13:02.558960 | orchestrator | 2025-11-01 14:13:02.558970 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-11-01 14:13:02.558980 | orchestrator | Saturday 01 November 2025 14:01:14 +0000 (0:00:00.783) 0:00:11.379 ***** 2025-11-01 14:13:02.559021 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.559033 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.559078 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.559090 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.559110 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.559119 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.559129 | orchestrator | 2025-11-01 14:13:02.559152 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-11-01 14:13:02.559162 | orchestrator | Saturday 01 November 2025 14:01:15 +0000 (0:00:01.147) 0:00:12.527 ***** 2025-11-01 14:13:02.559172 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-01 14:13:02.559181 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 14:13:02.559191 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 14:13:02.559209 | orchestrator | 2025-11-01 14:13:02.559219 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-11-01 14:13:02.559229 | orchestrator | Saturday 01 November 2025 14:01:18 +0000 (0:00:03.148) 0:00:15.676 ***** 2025-11-01 14:13:02.559238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-01 14:13:02.559248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-01 14:13:02.559258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-11-01 14:13:02.559267 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.559277 | orchestrator | 2025-11-01 14:13:02.559286 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-11-01 14:13:02.559296 | orchestrator | Saturday 01 November 2025 14:01:19 +0000 (0:00:00.610) 0:00:16.286 ***** 2025-11-01 14:13:02.559307 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.559320 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.559455 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.559478 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.559488 | orchestrator | 2025-11-01 14:13:02.559498 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-11-01 14:13:02.559594 | orchestrator | Saturday 01 November 2025 14:01:20 +0000 (0:00:00.754) 0:00:17.041 ***** 2025-11-01 14:13:02.559606 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.559618 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.559628 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.559638 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.559648 | orchestrator | 2025-11-01 14:13:02.559658 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-11-01 14:13:02.559667 | orchestrator | Saturday 01 November 2025 14:01:20 +0000 (0:00:00.569) 0:00:17.610 ***** 2025-11-01 14:13:02.559694 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-11-01 14:01:16.352775', 'end': '2025-11-01 14:01:16.638445', 'delta': '0:00:00.285670', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.559717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-11-01 14:01:17.715310', 'end': '2025-11-01 14:01:17.954949', 'delta': '0:00:00.239639', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 
'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.559727 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-11-01 14:01:18.418545', 'end': '2025-11-01 14:01:18.692321', 'delta': '0:00:00.273776', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.559737 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.559747 | orchestrator | 2025-11-01 14:13:02.559757 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-11-01 14:13:02.559766 | orchestrator | Saturday 01 November 2025 14:01:21 +0000 (0:00:00.216) 0:00:17.827 ***** 2025-11-01 14:13:02.559776 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.559785 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.559795 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.559816 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.559826 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.559835 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.559845 | orchestrator | 2025-11-01 14:13:02.559854 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-11-01 14:13:02.559864 | orchestrator | Saturday 01 November 2025 14:01:22 +0000 (0:00:01.701) 0:00:19.528 ***** 2025-11-01 14:13:02.559873 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-01 14:13:02.559883 | orchestrator | 2025-11-01 14:13:02.559892 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-11-01 14:13:02.559926 | orchestrator | Saturday 01 November 2025 14:01:23 +0000 (0:00:00.807) 0:00:20.336 ***** 2025-11-01 14:13:02.559999 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.560010 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.560086 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.560106 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.560122 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.560139 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.560147 | orchestrator | 2025-11-01 14:13:02.560155 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-11-01 14:13:02.560163 | orchestrator | Saturday 01 November 2025 14:01:25 +0000 (0:00:02.006) 0:00:22.342 ***** 2025-11-01 14:13:02.560170 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.560178 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.560186 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.560194 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.560208 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.560215 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.560223 | orchestrator | 2025-11-01 14:13:02.560231 | orchestrator | TASK [ceph-facts 
: Set_fact fsid] ********************************************** 2025-11-01 14:13:02.560239 | orchestrator | Saturday 01 November 2025 14:01:29 +0000 (0:00:03.928) 0:00:26.271 ***** 2025-11-01 14:13:02.560246 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.560254 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.560262 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.560269 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.560277 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.560285 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.560292 | orchestrator | 2025-11-01 14:13:02.560300 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-11-01 14:13:02.560308 | orchestrator | Saturday 01 November 2025 14:01:30 +0000 (0:00:01.377) 0:00:27.648 ***** 2025-11-01 14:13:02.560316 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.560323 | orchestrator | 2025-11-01 14:13:02.560331 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-11-01 14:13:02.560339 | orchestrator | Saturday 01 November 2025 14:01:31 +0000 (0:00:00.255) 0:00:27.904 ***** 2025-11-01 14:13:02.560347 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.560354 | orchestrator | 2025-11-01 14:13:02.560362 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-11-01 14:13:02.560370 | orchestrator | Saturday 01 November 2025 14:01:31 +0000 (0:00:00.612) 0:00:28.517 ***** 2025-11-01 14:13:02.560382 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.560390 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.560398 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.560405 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.560413 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.560488 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.560543 | orchestrator | 2025-11-01 14:13:02.560559 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-11-01 14:13:02.560567 | orchestrator | Saturday 01 November 2025 14:01:32 +0000 (0:00:01.050) 0:00:29.567 ***** 2025-11-01 14:13:02.560575 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.560582 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.560590 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.560598 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.560605 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.560613 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.560621 | orchestrator | 2025-11-01 14:13:02.560629 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-11-01 14:13:02.560636 | orchestrator | Saturday 01 November 2025 14:01:33 +0000 (0:00:00.790) 0:00:30.358 ***** 2025-11-01 14:13:02.560671 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.560680 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.560705 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.560713 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.560721 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.560729 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.560737 | orchestrator | 2025-11-01 14:13:02.560781 | orchestrator | TASK 
[ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-11-01 14:13:02.560787 | orchestrator | Saturday 01 November 2025 14:01:34 +0000 (0:00:00.796) 0:00:31.154 ***** 2025-11-01 14:13:02.560794 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.560808 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.560823 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.560838 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.560845 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.560851 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.560858 | orchestrator | 2025-11-01 14:13:02.560864 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-11-01 14:13:02.560876 | orchestrator | Saturday 01 November 2025 14:01:35 +0000 (0:00:01.171) 0:00:32.326 ***** 2025-11-01 14:13:02.560883 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.560889 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.560896 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.560902 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.560909 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.560915 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.560922 | orchestrator | 2025-11-01 14:13:02.560928 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-11-01 14:13:02.560935 | orchestrator | Saturday 01 November 2025 14:01:36 +0000 (0:00:00.929) 0:00:33.255 ***** 2025-11-01 14:13:02.560942 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.560948 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.560954 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.560961 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.560967 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.560974 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.560980 | orchestrator | 2025-11-01 14:13:02.560987 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-11-01 14:13:02.560994 | orchestrator | Saturday 01 November 2025 14:01:38 +0000 (0:00:01.808) 0:00:35.064 ***** 2025-11-01 14:13:02.561000 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.561007 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.561013 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.561020 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.561026 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.561033 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.561039 | orchestrator | 2025-11-01 14:13:02.561046 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-11-01 14:13:02.561052 | orchestrator | Saturday 01 November 2025 14:01:39 +0000 (0:00:00.810) 0:00:35.875 ***** 2025-11-01 14:13:02.561061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--47edfe94--e799--500a--9f78--eae255c41273-osd--block--47edfe94--e799--500a--9f78--eae255c41273', 'dm-uuid-LVM-5iecivp83EVTPr28Zo82u3SmraqQlgMlOF259DuNwwbNlvXYyRFxWEqhT3Hwjj3T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bf0a4791--ac15--5066--8808--a0a6deeb0cc9-osd--block--bf0a4791--ac15--5066--8808--a0a6deeb0cc9', 'dm-uuid-LVM-hFELSVlkHF2T1dngUyA28Zszw7xESt5CX4RRltN9L1kY8z3IVItlbqJ5Spb0z1k6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5630d3b4--f241--5aa8--9956--015e1822542e-osd--block--5630d3b4--f241--5aa8--9956--015e1822542e', 'dm-uuid-LVM-qVkuHQLPgmWWE2KI6ybDQxfNnLOCMMNUgFoysg5F3RFhAWIRm1IRmZHTEmyEA3hr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--efff7302--70e8--5bbc--90af--2166d1a25777-osd--block--efff7302--70e8--5bbc--90af--2166d1a25777', 'dm-uuid-LVM-64JnOfFwPenpvQr3sa3Knbc6XItP1ImhCmNfxnZcc0pZgEPfDBxMy1CqiNPlMPAh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f-osd--block--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f', 'dm-uuid-LVM-fxhH0SkmSCqWU4Wy7dw5tLQClhfedOljDMkZCymSSYMmhuftj12v8Tpyva9L0mc8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7e540012--4fa7--591e--a498--149cbb5b09d9-osd--block--7e540012--4fa7--591e--a498--149cbb5b09d9', 'dm-uuid-LVM-eZCOPRchOkTotQpPgPuFhEXd8dSlq2Gd5ddJWqEgSxlUkQ9NQ1bOeX0PsU7Z3aN0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bf0a4791--ac15--5066--8808--a0a6deeb0cc9-osd--block--bf0a4791--ac15--5066--8808--a0a6deeb0cc9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lShadK-zkzo-yGlR-ygJ8-c3QC-QIx0-1fIdlx', 'scsi-0QEMU_QEMU_HARDDISK_08ca9d91-9929-4ba3-9cad-ed75b64a043e', 'scsi-SQEMU_QEMU_HARDDISK_08ca9d91-9929-4ba3-9cad-ed75b64a043e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5630d3b4--f241--5aa8--9956--015e1822542e-osd--block--5630d3b4--f241--5aa8--9956--015e1822542e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-75UM0C-uJ1p-OTB6-fYTA-kSPP-fCvT-SJk04U', 'scsi-0QEMU_QEMU_HARDDISK_072d7475-b9a0-4b66-89cc-e4fcf46016ff', 'scsi-SQEMU_QEMU_HARDDISK_072d7475-b9a0-4b66-89cc-e4fcf46016ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5b7cda2-7cd1-4139-8c09-f2864ed6115a', 'scsi-SQEMU_QEMU_HARDDISK_d5b7cda2-7cd1-4139-8c09-f2864ed6115a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part1', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part14', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part15', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part16', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--47edfe94--e799--500a--9f78--eae255c41273-osd--block--47edfe94--e799--500a--9f78--eae255c41273'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YpnRQB-jBP1-82g6-g8fd-LeRA-d7Tm-eXHHyS', 'scsi-0QEMU_QEMU_HARDDISK_4fee078c-1565-4ab1-bdda-b8bebdd42045', 'scsi-SQEMU_QEMU_HARDDISK_4fee078c-1565-4ab1-bdda-b8bebdd42045'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part1', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part14', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part15', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part16', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--efff7302--70e8--5bbc--90af--2166d1a25777-osd--block--efff7302--70e8--5bbc--90af--2166d1a25777'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iiuCsz-VrdB-dgiu-Kx5a-URcy-cBxF-vraN5N', 'scsi-0QEMU_QEMU_HARDDISK_c17a8236-4766-4598-abab-5d58d5ce65a6', 'scsi-SQEMU_QEMU_HARDDISK_c17a8236-4766-4598-abab-5d58d5ce65a6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f-osd--block--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kj0JoU-vxqx-oB5o-AIwD-oewZ-c72h-zh64Ec', 'scsi-0QEMU_QEMU_HARDDISK_dbba508b-4e10-452f-8431-011284f42e7d', 'scsi-SQEMU_QEMU_HARDDISK_dbba508b-4e10-452f-8431-011284f42e7d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7e540012--4fa7--591e--a498--149cbb5b09d9-osd--block--7e540012--4fa7--591e--a498--149cbb5b09d9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-chllP0-9PgN-Y42z-FzFx-ub4p-LQKx-I4sZ4l', 'scsi-0QEMU_QEMU_HARDDISK_f57a5620-543a-43ae-a22d-8a42cad6fb24', 'scsi-SQEMU_QEMU_HARDDISK_f57a5620-543a-43ae-a22d-8a42cad6fb24'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d89e604-ccfa-4ce6-abe5-76180138882d', 'scsi-SQEMU_QEMU_HARDDISK_7d89e604-ccfa-4ce6-abe5-76180138882d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561674 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.561692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74', 'scsi-SQEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c347dc72-435c-43d5-a9cf-2c60f1de142e', 'scsi-SQEMU_QEMU_HARDDISK_c347dc72-435c-43d5-a9cf-2c60f1de142e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561740 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.561747 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.561754 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.561769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5', 'scsi-SQEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part1', 'scsi-SQEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part14', 'scsi-SQEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part15', 'scsi-SQEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part16', 'scsi-SQEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:13:02.561898 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.561906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-11-01 14:13:02.561955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:13:02.561984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740', 'scsi-SQEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part1', 'scsi-SQEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part14', 'scsi-SQEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part15', 'scsi-SQEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part16', 'scsi-SQEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-11-01 14:13:02.562004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-11-01 14:13:02.562012 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:13:02.562080 | orchestrator |
2025-11-01 14:13:02.562088 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-11-01 14:13:02.562095 | orchestrator | Saturday 01 November 2025 14:01:40 +0000 (0:00:01.622) 0:00:37.498 *****
2025-11-01 14:13:02.562118 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--47edfe94--e799--500a--9f78--eae255c41273-osd--block--47edfe94--e799--500a--9f78--eae255c41273', 'dm-uuid-LVM-5iecivp83EVTPr28Zo82u3SmraqQlgMlOF259DuNwwbNlvXYyRFxWEqhT3Hwjj3T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1',
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562135 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bf0a4791--ac15--5066--8808--a0a6deeb0cc9-osd--block--bf0a4791--ac15--5066--8808--a0a6deeb0cc9', 'dm-uuid-LVM-hFELSVlkHF2T1dngUyA28Zszw7xESt5CX4RRltN9L1kY8z3IVItlbqJ5Spb0z1k6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562142 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5630d3b4--f241--5aa8--9956--015e1822542e-osd--block--5630d3b4--f241--5aa8--9956--015e1822542e', 'dm-uuid-LVM-qVkuHQLPgmWWE2KI6ybDQxfNnLOCMMNUgFoysg5F3RFhAWIRm1IRmZHTEmyEA3hr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562155 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562178 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--efff7302--70e8--5bbc--90af--2166d1a25777-osd--block--efff7302--70e8--5bbc--90af--2166d1a25777', 'dm-uuid-LVM-64JnOfFwPenpvQr3sa3Knbc6XItP1ImhCmNfxnZcc0pZgEPfDBxMy1CqiNPlMPAh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562186 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562193 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562200 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562211 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562218 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562225 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f-osd--block--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f', 'dm-uuid-LVM-fxhH0SkmSCqWU4Wy7dw5tLQClhfedOljDMkZCymSSYMmhuftj12v8Tpyva9L0mc8'], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562240 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562247 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562254 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562261 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7e540012--4fa7--591e--a498--149cbb5b09d9-osd--block--7e540012--4fa7--591e--a498--149cbb5b09d9', 'dm-uuid-LVM-eZCOPRchOkTotQpPgPuFhEXd8dSlq2Gd5ddJWqEgSxlUkQ9NQ1bOeX0PsU7Z3aN0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562273 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562280 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562309 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562317 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562324 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562331 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562343 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562350 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562357 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562372 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562380 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562392 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562406 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bf0a4791--ac15--5066--8808--a0a6deeb0cc9-osd--block--bf0a4791--ac15--5066--8808--a0a6deeb0cc9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lShadK-zkzo-yGlR-ygJ8-c3QC-QIx0-1fIdlx', 'scsi-0QEMU_QEMU_HARDDISK_08ca9d91-9929-4ba3-9cad-ed75b64a043e', 'scsi-SQEMU_QEMU_HARDDISK_08ca9d91-9929-4ba3-9cad-ed75b64a043e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562414 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5630d3b4--f241--5aa8--9956--015e1822542e-osd--block--5630d3b4--f241--5aa8--9956--015e1822542e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-75UM0C-uJ1p-OTB6-fYTA-kSPP-fCvT-SJk04U', 'scsi-0QEMU_QEMU_HARDDISK_072d7475-b9a0-4b66-89cc-e4fcf46016ff', 'scsi-SQEMU_QEMU_HARDDISK_072d7475-b9a0-4b66-89cc-e4fcf46016ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562421 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5b7cda2-7cd1-4139-8c09-f2864ed6115a', 'scsi-SQEMU_QEMU_HARDDISK_d5b7cda2-7cd1-4139-8c09-f2864ed6115a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562433 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562440 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562447 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562464 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562472 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part1', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part14', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part15', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part16', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562485 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--47edfe94--e799--500a--9f78--eae255c41273-osd--block--47edfe94--e799--500a--9f78--eae255c41273'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YpnRQB-jBP1-82g6-g8fd-LeRA-d7Tm-eXHHyS', 'scsi-0QEMU_QEMU_HARDDISK_4fee078c-1565-4ab1-bdda-b8bebdd42045', 'scsi-SQEMU_QEMU_HARDDISK_4fee078c-1565-4ab1-bdda-b8bebdd42045'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562516 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--efff7302--70e8--5bbc--90af--2166d1a25777-osd--block--efff7302--70e8--5bbc--90af--2166d1a25777'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iiuCsz-VrdB-dgiu-Kx5a-URcy-cBxF-vraN5N', 'scsi-0QEMU_QEMU_HARDDISK_c17a8236-4766-4598-abab-5d58d5ce65a6', 'scsi-SQEMU_QEMU_HARDDISK_c17a8236-4766-4598-abab-5d58d5ce65a6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562524 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562536 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part1', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part14', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part15', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part16', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 
14:13:02.562551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d89e604-ccfa-4ce6-abe5-76180138882d', 'scsi-SQEMU_QEMU_HARDDISK_7d89e604-ccfa-4ce6-abe5-76180138882d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562559 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f-osd--block--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kj0JoU-vxqx-oB5o-AIwD-oewZ-c72h-zh64Ec', 'scsi-0QEMU_QEMU_HARDDISK_dbba508b-4e10-452f-8431-011284f42e7d', 'scsi-SQEMU_QEMU_HARDDISK_dbba508b-4e10-452f-8431-011284f42e7d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562570 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7e540012--4fa7--591e--a498--149cbb5b09d9-osd--block--7e540012--4fa7--591e--a498--149cbb5b09d9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-chllP0-9PgN-Y42z-FzFx-ub4p-LQKx-I4sZ4l', 'scsi-0QEMU_QEMU_HARDDISK_f57a5620-543a-43ae-a22d-8a42cad6fb24', 'scsi-SQEMU_QEMU_HARDDISK_f57a5620-543a-43ae-a22d-8a42cad6fb24'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562577 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562584 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c347dc72-435c-43d5-a9cf-2c60f1de142e', 'scsi-SQEMU_QEMU_HARDDISK_c347dc72-435c-43d5-a9cf-2c60f1de142e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562598 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562606 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 
'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562618 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562625 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562632 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562639 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562646 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562661 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562668 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562680 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74', 'scsi-SQEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c449378-6b5d-40f6-b01f-2139793b2b74-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562690 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562701 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.562708 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562720 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562727 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562734 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562741 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562748 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562762 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562773 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562780 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.562787 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.562794 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.562801 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740', 'scsi-SQEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part1', 'scsi-SQEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part14', 'scsi-SQEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part15', 'scsi-SQEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part16', 'scsi-SQEMU_QEMU_HARDDISK_ca5cc0c4-c9f2-4f36-a6e7-4d61f5e49740-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562812 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562819 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.562837 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562844 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562851 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562857 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562864 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562871 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562887 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562899 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562906 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5', 'scsi-SQEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part1', 'scsi-SQEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part14', 'scsi-SQEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part15', 'scsi-SQEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part16', 'scsi-SQEMU_QEMU_HARDDISK_f1202cd8-baed-4ac9-b605-f8cc9e76d4d5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562917 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:13:02.562928 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.562935 | orchestrator | 2025-11-01 14:13:02.562941 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-11-01 14:13:02.562948 | orchestrator | Saturday 01 November 2025 14:01:43 +0000 (0:00:03.155) 0:00:40.653 ***** 2025-11-01 14:13:02.562959 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.562966 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.562973 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.562979 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.562986 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.562992 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.562999 | orchestrator | 2025-11-01 14:13:02.563005 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-11-01 14:13:02.563012 | orchestrator | Saturday 01 November 2025 14:01:45 +0000 (0:00:01.846) 0:00:42.500 ***** 2025-11-01 14:13:02.563018 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.563025 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.563031 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.563038 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.563053 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.563060 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.563066 | orchestrator | 2025-11-01 14:13:02.563073 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-11-01 14:13:02.563079 | orchestrator | Saturday 01 November 2025 14:01:46 +0000 (0:00:01.139) 0:00:43.640 ***** 2025-11-01 14:13:02.563086 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.563092 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.563099 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.563105 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.563112 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.563118 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.563125 | orchestrator | 2025-11-01 14:13:02.563131 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-11-01 14:13:02.563138 | orchestrator | Saturday 01 November 2025 14:01:49 +0000 (0:00:03.164) 0:00:46.805 ***** 2025-11-01 14:13:02.563144 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.563151 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.563157 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.563164 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.563170 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.563177 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.563183 | orchestrator | 2025-11-01 14:13:02.563190 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] 
*************************** 2025-11-01 14:13:02.563197 | orchestrator | Saturday 01 November 2025 14:01:51 +0000 (0:00:01.843) 0:00:48.648 ***** 2025-11-01 14:13:02.563203 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.563210 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.563216 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.563223 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.563229 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.563235 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.563242 | orchestrator | 2025-11-01 14:13:02.563249 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-11-01 14:13:02.563255 | orchestrator | Saturday 01 November 2025 14:01:53 +0000 (0:00:01.793) 0:00:50.442 ***** 2025-11-01 14:13:02.563262 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.563268 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.563275 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.563281 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.563287 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.563294 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.563300 | orchestrator | 2025-11-01 14:13:02.563307 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-11-01 14:13:02.563318 | orchestrator | Saturday 01 November 2025 14:01:54 +0000 (0:00:00.959) 0:00:51.401 ***** 2025-11-01 14:13:02.563325 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-11-01 14:13:02.563332 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-11-01 14:13:02.563338 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-11-01 14:13:02.563345 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-11-01 14:13:02.563351 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-11-01 14:13:02.563358 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-11-01 14:13:02.563364 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-11-01 14:13:02.563371 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-11-01 14:13:02.563377 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-01 14:13:02.563384 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-11-01 14:13:02.563390 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-11-01 14:13:02.563397 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-11-01 14:13:02.563403 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-11-01 14:13:02.563410 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-11-01 14:13:02.563416 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-11-01 14:13:02.563422 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-11-01 14:13:02.563429 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-11-01 14:13:02.563435 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-11-01 14:13:02.563442 | orchestrator | 2025-11-01 14:13:02.563448 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-11-01 14:13:02.563455 | orchestrator | Saturday 01 November 2025 14:01:59 +0000 (0:00:04.593) 0:00:55.995 ***** 2025-11-01 14:13:02.563461 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2025-11-01 14:13:02.563468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-01 14:13:02.563475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-11-01 14:13:02.563481 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.563488 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-11-01 14:13:02.563494 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-11-01 14:13:02.563538 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-11-01 14:13:02.563550 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.563557 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-11-01 14:13:02.563564 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-11-01 14:13:02.563583 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-11-01 14:13:02.563590 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.563597 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-01 14:13:02.563604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-01 14:13:02.563610 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-01 14:13:02.563617 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.563623 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-11-01 14:13:02.563630 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-11-01 14:13:02.563636 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-11-01 14:13:02.563643 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.563649 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-11-01 14:13:02.563656 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-11-01 14:13:02.563662 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-11-01 14:13:02.563669 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.563675 | orchestrator | 2025-11-01 14:13:02.563682 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-11-01 14:13:02.563694 | orchestrator | Saturday 01 November 2025 14:02:00 +0000 (0:00:01.548) 0:00:57.544 ***** 2025-11-01 14:13:02.563700 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.563707 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.563713 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.563720 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.563727 | orchestrator | 2025-11-01 14:13:02.563734 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-11-01 14:13:02.563741 | orchestrator | Saturday 01 November 2025 14:02:02 +0000 (0:00:01.977) 0:00:59.521 ***** 2025-11-01 14:13:02.563747 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.563754 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.563760 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.563767 | orchestrator | 2025-11-01 14:13:02.563773 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-11-01 14:13:02.563780 | orchestrator | Saturday 01 November 2025 
14:02:03 +0000 (0:00:00.925) 0:01:00.447 ***** 2025-11-01 14:13:02.563787 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.563793 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.563800 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.563806 | orchestrator | 2025-11-01 14:13:02.563813 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-11-01 14:13:02.563819 | orchestrator | Saturday 01 November 2025 14:02:04 +0000 (0:00:00.983) 0:01:01.431 ***** 2025-11-01 14:13:02.563826 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.563832 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.563839 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.563845 | orchestrator | 2025-11-01 14:13:02.563852 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-11-01 14:13:02.563859 | orchestrator | Saturday 01 November 2025 14:02:05 +0000 (0:00:00.957) 0:01:02.388 ***** 2025-11-01 14:13:02.563865 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.563872 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.563878 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.563885 | orchestrator | 2025-11-01 14:13:02.563891 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-11-01 14:13:02.563898 | orchestrator | Saturday 01 November 2025 14:02:06 +0000 (0:00:00.755) 0:01:03.143 ***** 2025-11-01 14:13:02.563905 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 14:13:02.563911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:13:02.563918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:13:02.563924 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.563931 | orchestrator | 2025-11-01 14:13:02.563937 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-11-01 14:13:02.563944 | orchestrator | Saturday 01 November 2025 14:02:07 +0000 (0:00:01.039) 0:01:04.183 ***** 2025-11-01 14:13:02.563950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 14:13:02.563956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:13:02.563962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:13:02.563968 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.563974 | orchestrator | 2025-11-01 14:13:02.563980 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-11-01 14:13:02.563986 | orchestrator | Saturday 01 November 2025 14:02:07 +0000 (0:00:00.597) 0:01:04.780 ***** 2025-11-01 14:13:02.563992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 14:13:02.563998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:13:02.564004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:13:02.564015 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.564021 | orchestrator | 2025-11-01 14:13:02.564027 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-11-01 14:13:02.564033 | orchestrator | Saturday 01 November 2025 14:02:08 +0000 (0:00:00.621) 0:01:05.401 ***** 2025-11-01 14:13:02.564039 | orchestrator | ok: [testbed-node-3] 2025-11-01 
14:13:02.564045 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.564051 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.564057 | orchestrator | 2025-11-01 14:13:02.564064 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-11-01 14:13:02.564073 | orchestrator | Saturday 01 November 2025 14:02:09 +0000 (0:00:00.717) 0:01:06.119 ***** 2025-11-01 14:13:02.564079 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-01 14:13:02.564085 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-11-01 14:13:02.564092 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-11-01 14:13:02.564098 | orchestrator | 2025-11-01 14:13:02.564108 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-11-01 14:13:02.564114 | orchestrator | Saturday 01 November 2025 14:02:10 +0000 (0:00:01.487) 0:01:07.606 ***** 2025-11-01 14:13:02.564120 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-01 14:13:02.564126 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 14:13:02.564132 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 14:13:02.564139 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-01 14:13:02.564145 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-01 14:13:02.564151 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-11-01 14:13:02.564157 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-11-01 14:13:02.564163 | orchestrator | 2025-11-01 14:13:02.564169 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-11-01 14:13:02.564175 | orchestrator | Saturday 01 November 2025 14:02:11 +0000 (0:00:00.827) 0:01:08.434 ***** 2025-11-01 14:13:02.564181 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-01 14:13:02.564187 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 14:13:02.564193 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 14:13:02.564199 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-01 14:13:02.564205 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-01 14:13:02.564211 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-11-01 14:13:02.564217 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-11-01 14:13:02.564224 | orchestrator | 2025-11-01 14:13:02.564230 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-01 14:13:02.564236 | orchestrator | Saturday 01 November 2025 14:02:13 +0000 (0:00:02.286) 0:01:10.721 ***** 2025-11-01 14:13:02.564242 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.564249 | orchestrator | 2025-11-01 14:13:02.564255 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-01 
14:13:02.564261 | orchestrator | Saturday 01 November 2025 14:02:15 +0000 (0:00:01.437) 0:01:12.159 ***** 2025-11-01 14:13:02.564268 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.564274 | orchestrator | 2025-11-01 14:13:02.564280 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-01 14:13:02.564329 | orchestrator | Saturday 01 November 2025 14:02:16 +0000 (0:00:01.597) 0:01:13.756 ***** 2025-11-01 14:13:02.564336 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.564342 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.564348 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.564354 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.564360 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.564366 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.564372 | orchestrator | 2025-11-01 14:13:02.564378 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-01 14:13:02.564385 | orchestrator | Saturday 01 November 2025 14:02:18 +0000 (0:00:01.848) 0:01:15.604 ***** 2025-11-01 14:13:02.564391 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.564397 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.564403 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.564409 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.564415 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.564421 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.564427 | orchestrator | 2025-11-01 14:13:02.564433 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-01 14:13:02.564439 | orchestrator | Saturday 01 November 2025 14:02:20 +0000 (0:00:01.315) 0:01:16.920 ***** 2025-11-01 14:13:02.564445 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.564452 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.564458 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.564464 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.564470 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.564476 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.564482 | orchestrator | 2025-11-01 14:13:02.564488 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-01 14:13:02.564494 | orchestrator | Saturday 01 November 2025 14:02:22 +0000 (0:00:02.401) 0:01:19.321 ***** 2025-11-01 14:13:02.564513 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.564520 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.564526 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.564532 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.564538 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.564544 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.564550 | orchestrator | 2025-11-01 14:13:02.564556 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-01 14:13:02.564562 | orchestrator | Saturday 01 November 2025 14:02:24 +0000 (0:00:01.634) 0:01:20.956 ***** 2025-11-01 14:13:02.564568 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.564574 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.564584 | 
orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.564590 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.564596 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.564602 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.564608 | orchestrator | 2025-11-01 14:13:02.564615 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-01 14:13:02.564625 | orchestrator | Saturday 01 November 2025 14:02:25 +0000 (0:00:01.654) 0:01:22.610 ***** 2025-11-01 14:13:02.564631 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.564637 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.564643 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.564649 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.564655 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.564661 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.564667 | orchestrator | 2025-11-01 14:13:02.564673 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-01 14:13:02.564679 | orchestrator | Saturday 01 November 2025 14:02:26 +0000 (0:00:00.748) 0:01:23.358 ***** 2025-11-01 14:13:02.564685 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.564692 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.564702 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.564709 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.564715 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.564721 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.564727 | orchestrator | 2025-11-01 14:13:02.564733 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-01 14:13:02.564739 | orchestrator | Saturday 01 November 2025 14:02:27 +0000 (0:00:01.017) 0:01:24.376 ***** 2025-11-01 14:13:02.564745 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.564751 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.564757 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.564763 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.564769 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.564775 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.564781 | orchestrator | 2025-11-01 14:13:02.564787 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-01 14:13:02.564793 | orchestrator | Saturday 01 November 2025 14:02:28 +0000 (0:00:01.360) 0:01:25.737 ***** 2025-11-01 14:13:02.564799 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.564805 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.564811 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.564817 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.564823 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.564829 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.564835 | orchestrator | 2025-11-01 14:13:02.564842 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-01 14:13:02.564848 | orchestrator | Saturday 01 November 2025 14:02:30 +0000 (0:00:01.789) 0:01:27.527 ***** 2025-11-01 14:13:02.564854 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.564860 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.564866 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.564872 | orchestrator 
| skipping: [testbed-node-0] 2025-11-01 14:13:02.564878 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.564884 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.564890 | orchestrator | 2025-11-01 14:13:02.564896 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-01 14:13:02.564902 | orchestrator | Saturday 01 November 2025 14:02:31 +0000 (0:00:00.852) 0:01:28.379 ***** 2025-11-01 14:13:02.564909 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.564915 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.564921 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.564927 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.564933 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.564939 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.564945 | orchestrator | 2025-11-01 14:13:02.564951 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-01 14:13:02.564957 | orchestrator | Saturday 01 November 2025 14:02:32 +0000 (0:00:01.115) 0:01:29.495 ***** 2025-11-01 14:13:02.564963 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.564969 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.564975 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.564981 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.564987 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.564994 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.565000 | orchestrator | 2025-11-01 14:13:02.565006 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-01 14:13:02.565012 | orchestrator | Saturday 01 November 2025 14:02:33 +0000 (0:00:01.239) 0:01:30.735 ***** 2025-11-01 14:13:02.565018 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.565024 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.565030 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.565036 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.565042 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.565048 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.565054 | orchestrator | 2025-11-01 14:13:02.565065 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-01 14:13:02.565071 | orchestrator | Saturday 01 November 2025 14:02:34 +0000 (0:00:00.876) 0:01:31.611 ***** 2025-11-01 14:13:02.565077 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.565083 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.565089 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.565095 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.565101 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.565107 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.565113 | orchestrator | 2025-11-01 14:13:02.565120 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-01 14:13:02.565126 | orchestrator | Saturday 01 November 2025 14:02:35 +0000 (0:00:00.993) 0:01:32.604 ***** 2025-11-01 14:13:02.565132 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.565138 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.565144 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.565150 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.565156 | 
orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.565162 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.565168 | orchestrator | 2025-11-01 14:13:02.565174 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-01 14:13:02.565180 | orchestrator | Saturday 01 November 2025 14:02:36 +0000 (0:00:00.930) 0:01:33.535 ***** 2025-11-01 14:13:02.565186 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.565196 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.565202 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.565209 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.565215 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.565221 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.565227 | orchestrator | 2025-11-01 14:13:02.565236 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-01 14:13:02.565243 | orchestrator | Saturday 01 November 2025 14:02:37 +0000 (0:00:00.824) 0:01:34.360 ***** 2025-11-01 14:13:02.565249 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.565255 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.565261 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.565267 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.565273 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.565279 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.565285 | orchestrator | 2025-11-01 14:13:02.565291 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-01 14:13:02.565297 | orchestrator | Saturday 01 November 2025 14:02:38 +0000 (0:00:01.059) 0:01:35.419 ***** 2025-11-01 14:13:02.565303 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.565309 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.565315 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.565321 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.565327 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.565333 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.565339 | orchestrator | 2025-11-01 14:13:02.565345 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-01 14:13:02.565352 | orchestrator | Saturday 01 November 2025 14:02:39 +0000 (0:00:00.731) 0:01:36.151 ***** 2025-11-01 14:13:02.565358 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.565364 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.565370 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.565376 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.565382 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.565388 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.565394 | orchestrator | 2025-11-01 14:13:02.565400 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-11-01 14:13:02.565406 | orchestrator | Saturday 01 November 2025 14:02:40 +0000 (0:00:01.321) 0:01:37.473 ***** 2025-11-01 14:13:02.565412 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.565422 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.565428 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.565435 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.565441 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.565447 | orchestrator | 
changed: [testbed-node-2] 2025-11-01 14:13:02.565453 | orchestrator | 2025-11-01 14:13:02.565459 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-11-01 14:13:02.565465 | orchestrator | Saturday 01 November 2025 14:02:42 +0000 (0:00:01.472) 0:01:38.946 ***** 2025-11-01 14:13:02.565471 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.565477 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.565483 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.565489 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.565495 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.565512 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.565519 | orchestrator | 2025-11-01 14:13:02.565525 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-11-01 14:13:02.565531 | orchestrator | Saturday 01 November 2025 14:02:44 +0000 (0:00:02.616) 0:01:41.563 ***** 2025-11-01 14:13:02.565537 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.565543 | orchestrator | 2025-11-01 14:13:02.565549 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-11-01 14:13:02.565556 | orchestrator | Saturday 01 November 2025 14:02:45 +0000 (0:00:01.205) 0:01:42.769 ***** 2025-11-01 14:13:02.565562 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.565568 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.565574 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.565580 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.565586 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.565592 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.565598 | orchestrator | 2025-11-01 14:13:02.565604 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-11-01 14:13:02.565610 | orchestrator | Saturday 01 November 2025 14:02:46 +0000 (0:00:00.689) 0:01:43.458 ***** 2025-11-01 14:13:02.565616 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.565622 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.565628 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.565634 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.565640 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.565646 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.565652 | orchestrator | 2025-11-01 14:13:02.565658 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-11-01 14:13:02.565664 | orchestrator | Saturday 01 November 2025 14:02:47 +0000 (0:00:00.923) 0:01:44.382 ***** 2025-11-01 14:13:02.565670 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-01 14:13:02.565676 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-01 14:13:02.565683 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-01 14:13:02.565689 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-01 14:13:02.565695 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-01 14:13:02.565701 | 
orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-11-01 14:13:02.565707 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-01 14:13:02.565713 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-01 14:13:02.565719 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-01 14:13:02.565725 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-01 14:13:02.565739 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-01 14:13:02.565749 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-11-01 14:13:02.565755 | orchestrator | 2025-11-01 14:13:02.565761 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-11-01 14:13:02.565767 | orchestrator | Saturday 01 November 2025 14:02:49 +0000 (0:00:01.455) 0:01:45.838 ***** 2025-11-01 14:13:02.565774 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.565780 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.565786 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.565792 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.565798 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.565804 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.565810 | orchestrator | 2025-11-01 14:13:02.565816 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-11-01 14:13:02.565822 | orchestrator | Saturday 01 November 2025 14:02:50 +0000 (0:00:01.479) 0:01:47.317 ***** 2025-11-01 14:13:02.565828 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.565834 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.565840 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.565846 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.565852 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.565858 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.565864 | orchestrator | 2025-11-01 14:13:02.565870 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-11-01 14:13:02.565877 | orchestrator | Saturday 01 November 2025 14:02:51 +0000 (0:00:00.779) 0:01:48.097 ***** 2025-11-01 14:13:02.565883 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.565889 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.565895 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.565901 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.565907 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.565912 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.565918 | orchestrator | 2025-11-01 14:13:02.565925 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-11-01 14:13:02.565931 | orchestrator | Saturday 01 November 2025 14:02:52 +0000 (0:00:01.128) 0:01:49.225 ***** 2025-11-01 14:13:02.565937 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.565943 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.565949 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.565955 | orchestrator | skipping: [testbed-node-0] 
2025-11-01 14:13:02.565961 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.565967 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.565973 | orchestrator | 2025-11-01 14:13:02.565979 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-11-01 14:13:02.565985 | orchestrator | Saturday 01 November 2025 14:02:53 +0000 (0:00:00.760) 0:01:49.986 ***** 2025-11-01 14:13:02.565992 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.565998 | orchestrator | 2025-11-01 14:13:02.566004 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-11-01 14:13:02.566010 | orchestrator | Saturday 01 November 2025 14:02:54 +0000 (0:00:01.520) 0:01:51.506 ***** 2025-11-01 14:13:02.566035 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.566043 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.566049 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.566055 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.566061 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.566067 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.566073 | orchestrator | 2025-11-01 14:13:02.566079 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-11-01 14:13:02.566090 | orchestrator | Saturday 01 November 2025 14:03:40 +0000 (0:00:46.217) 0:02:37.724 ***** 2025-11-01 14:13:02.566097 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-01 14:13:02.566103 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-01 14:13:02.566151 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-01 14:13:02.566166 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.566172 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-01 14:13:02.566179 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-01 14:13:02.566185 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-01 14:13:02.566191 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.566197 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-01 14:13:02.566203 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-01 14:13:02.566209 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-01 14:13:02.566215 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.566221 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-01 14:13:02.566227 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-01 14:13:02.566233 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-01 14:13:02.566239 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.566246 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-01 14:13:02.566252 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-01 14:13:02.566261 | 
orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-01 14:13:02.566268 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.566274 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-11-01 14:13:02.566292 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-11-01 14:13:02.566309 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-11-01 14:13:02.566315 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.566321 | orchestrator | 2025-11-01 14:13:02.566328 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-11-01 14:13:02.566334 | orchestrator | Saturday 01 November 2025 14:03:41 +0000 (0:00:00.624) 0:02:38.348 ***** 2025-11-01 14:13:02.566340 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.566346 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.566352 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.566358 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.566364 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.566370 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.566376 | orchestrator | 2025-11-01 14:13:02.566382 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-11-01 14:13:02.566388 | orchestrator | Saturday 01 November 2025 14:03:42 +0000 (0:00:00.837) 0:02:39.186 ***** 2025-11-01 14:13:02.566394 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.566401 | orchestrator | 2025-11-01 14:13:02.566407 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-11-01 14:13:02.566413 | orchestrator | Saturday 01 November 2025 14:03:42 +0000 (0:00:00.157) 0:02:39.344 ***** 2025-11-01 14:13:02.566419 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.566425 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.566431 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.566437 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.566443 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.566454 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.566460 | orchestrator | 2025-11-01 14:13:02.566466 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-11-01 14:13:02.566473 | orchestrator | Saturday 01 November 2025 14:03:43 +0000 (0:00:00.875) 0:02:40.220 ***** 2025-11-01 14:13:02.566479 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.566485 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.566491 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.566497 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.566516 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.566522 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.566528 | orchestrator | 2025-11-01 14:13:02.566534 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-11-01 14:13:02.566540 | orchestrator | Saturday 01 November 2025 14:03:44 +0000 (0:00:01.236) 0:02:41.456 ***** 2025-11-01 14:13:02.566546 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.566552 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.566558 | orchestrator | skipping: 
[testbed-node-5] 2025-11-01 14:13:02.566564 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.566570 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.566576 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.566582 | orchestrator | 2025-11-01 14:13:02.566589 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-11-01 14:13:02.566595 | orchestrator | Saturday 01 November 2025 14:03:45 +0000 (0:00:00.866) 0:02:42.323 ***** 2025-11-01 14:13:02.566601 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.566607 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.566613 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.566619 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.566625 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.566632 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.566638 | orchestrator | 2025-11-01 14:13:02.566644 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-11-01 14:13:02.566650 | orchestrator | Saturday 01 November 2025 14:03:48 +0000 (0:00:03.301) 0:02:45.625 ***** 2025-11-01 14:13:02.566656 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.566662 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.566668 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.566674 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.566680 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.566686 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.566692 | orchestrator | 2025-11-01 14:13:02.566698 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-11-01 14:13:02.566704 | orchestrator | Saturday 01 November 2025 14:03:49 +0000 (0:00:00.672) 0:02:46.298 ***** 2025-11-01 14:13:02.566711 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.566718 | orchestrator | 2025-11-01 14:13:02.566724 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-11-01 14:13:02.566730 | orchestrator | Saturday 01 November 2025 14:03:51 +0000 (0:00:01.537) 0:02:47.835 ***** 2025-11-01 14:13:02.566736 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.566742 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.566748 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.566754 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.566760 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.566766 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.566772 | orchestrator | 2025-11-01 14:13:02.566778 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-11-01 14:13:02.566784 | orchestrator | Saturday 01 November 2025 14:03:52 +0000 (0:00:01.191) 0:02:49.026 ***** 2025-11-01 14:13:02.566791 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.566797 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.566807 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.566813 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.566819 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.566825 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.566831 | orchestrator | 
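A note on the "Get ceph version" step above: in this containerized deployment the version string is obtained by running the ceph binary inside the image that was pulled earlier in the log and splitting its stdout, and that parsed value is what drives the release ladder that follows. A minimal sketch of the pattern, where container_binary, ceph_docker_image and ceph_docker_image_tag are assumed variable names and the tasks are illustrative, not the verbatim ceph-ansible source:

    - name: Get ceph version
      ansible.builtin.command: >-
        {{ container_binary }} run --rm --entrypoint /usr/bin/ceph
        {{ ceph_docker_image }}:{{ ceph_docker_image_tag }} --version
      register: ceph_version_out
      changed_when: false

    - name: Set_fact ceph_version from the command output
      ansible.builtin.set_fact:
        # e.g. "ceph version 18.2.2 (...) reef (stable)" -> "18.2.2"
        ceph_version: "{{ ceph_version_out.stdout.split(' ')[2] }}"

Reading the version from the image itself, rather than trusting the tag name, is what makes the ~46 s image pull earlier in the log a prerequisite for everything that follows.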
2025-11-01 14:13:02.566837 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-11-01 14:13:02.566844 | orchestrator | Saturday 01 November 2025 14:03:53 +0000 (0:00:01.232) 0:02:50.259 ***** 2025-11-01 14:13:02.566853 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.566859 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.566865 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.566871 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.566895 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.566906 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.566913 | orchestrator | 2025-11-01 14:13:02.566919 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-11-01 14:13:02.566925 | orchestrator | Saturday 01 November 2025 14:03:54 +0000 (0:00:01.168) 0:02:51.427 ***** 2025-11-01 14:13:02.566932 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.566938 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.566944 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.566950 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.566956 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.566962 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.566968 | orchestrator | 2025-11-01 14:13:02.566974 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-11-01 14:13:02.566980 | orchestrator | Saturday 01 November 2025 14:03:55 +0000 (0:00:01.037) 0:02:52.465 ***** 2025-11-01 14:13:02.566986 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.566993 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.566999 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.567005 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.567011 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.567017 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.567023 | orchestrator | 2025-11-01 14:13:02.567029 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-11-01 14:13:02.567035 | orchestrator | Saturday 01 November 2025 14:03:56 +0000 (0:00:01.342) 0:02:53.808 ***** 2025-11-01 14:13:02.567041 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.567047 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.567053 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.567059 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.567065 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.567071 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.567077 | orchestrator | 2025-11-01 14:13:02.567083 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-11-01 14:13:02.567090 | orchestrator | Saturday 01 November 2025 14:03:57 +0000 (0:00:00.768) 0:02:54.577 ***** 2025-11-01 14:13:02.567096 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.567113 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.567119 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.567125 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.567139 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.567145 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.567151 | 
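The jewel, kraken, luminous, mimic, nautilus, octopus and pacific tasks above, together with the quincy and reef tasks just below, form a ladder of guarded set_fact calls: each rung fires only when the parsed version matches its release, so exactly one of them reports ok (reef in this run) and all the others are skipped. One rung of that ladder might look like the following sketch, reusing the ceph_version fact from the previous step; the version test shown is an assumption, not the verbatim role code:

    - name: Set_fact ceph_release reef
      ansible.builtin.set_fact:
        ceph_release: reef
      when: ceph_version.split('.')[0] is version('18', '==')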
orchestrator | 2025-11-01 14:13:02.567157 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-11-01 14:13:02.567164 | orchestrator | Saturday 01 November 2025 14:03:58 +0000 (0:00:01.012) 0:02:55.589 ***** 2025-11-01 14:13:02.567170 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.567176 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.567182 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.567188 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.567194 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.567205 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.567211 | orchestrator | 2025-11-01 14:13:02.567217 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-11-01 14:13:02.567223 | orchestrator | Saturday 01 November 2025 14:03:59 +0000 (0:00:00.968) 0:02:56.557 ***** 2025-11-01 14:13:02.567229 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.567235 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.567241 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.567248 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.567254 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.567260 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.567266 | orchestrator | 2025-11-01 14:13:02.567272 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-11-01 14:13:02.567278 | orchestrator | Saturday 01 November 2025 14:04:01 +0000 (0:00:01.900) 0:02:58.458 ***** 2025-11-01 14:13:02.567285 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.567291 | orchestrator | 2025-11-01 14:13:02.567297 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-11-01 14:13:02.567303 | orchestrator | Saturday 01 November 2025 14:04:03 +0000 (0:00:01.657) 0:03:00.116 ***** 2025-11-01 14:13:02.567309 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-11-01 14:13:02.567316 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-11-01 14:13:02.567322 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-11-01 14:13:02.567328 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-11-01 14:13:02.567334 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-11-01 14:13:02.567340 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-11-01 14:13:02.567346 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-11-01 14:13:02.567352 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-11-01 14:13:02.567358 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-11-01 14:13:02.567364 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-11-01 14:13:02.567370 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-11-01 14:13:02.567376 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-11-01 14:13:02.567382 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-11-01 14:13:02.567389 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-11-01 14:13:02.567395 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 
2025-11-01 14:13:02.567401 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-11-01 14:13:02.567407 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-11-01 14:13:02.567416 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-11-01 14:13:02.567423 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-11-01 14:13:02.567429 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-11-01 14:13:02.567439 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-11-01 14:13:02.567445 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-11-01 14:13:02.567452 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-11-01 14:13:02.567458 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-11-01 14:13:02.567464 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-11-01 14:13:02.567470 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-11-01 14:13:02.567476 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-11-01 14:13:02.567482 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-11-01 14:13:02.567488 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-11-01 14:13:02.567537 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-11-01 14:13:02.567544 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-11-01 14:13:02.567551 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-11-01 14:13:02.567557 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-11-01 14:13:02.567563 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-11-01 14:13:02.567569 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-11-01 14:13:02.567575 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-11-01 14:13:02.567581 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-11-01 14:13:02.567587 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-11-01 14:13:02.567593 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-11-01 14:13:02.567599 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-11-01 14:13:02.567605 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-01 14:13:02.567611 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-11-01 14:13:02.567617 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-11-01 14:13:02.567623 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-01 14:13:02.567629 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-11-01 14:13:02.567635 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-11-01 14:13:02.567641 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-01 14:13:02.567647 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-11-01 14:13:02.567653 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-01 14:13:02.567659 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-11-01 14:13:02.567665 | orchestrator | changed: 
[testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-11-01 14:13:02.567671 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-01 14:13:02.567678 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-01 14:13:02.567684 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-11-01 14:13:02.567690 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-01 14:13:02.567696 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-01 14:13:02.567702 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-01 14:13:02.567708 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-01 14:13:02.567714 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-01 14:13:02.567720 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-01 14:13:02.567726 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-01 14:13:02.567732 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-11-01 14:13:02.567738 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-01 14:13:02.567744 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-01 14:13:02.567750 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-01 14:13:02.567756 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-01 14:13:02.567762 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-01 14:13:02.567768 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-11-01 14:13:02.567774 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-01 14:13:02.567780 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-01 14:13:02.567791 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-01 14:13:02.567797 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-01 14:13:02.567804 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-01 14:13:02.567810 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-11-01 14:13:02.567816 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-01 14:13:02.567825 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-01 14:13:02.567831 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-11-01 14:13:02.567837 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-11-01 14:13:02.567848 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-01 14:13:02.567854 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-11-01 14:13:02.567861 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-11-01 14:13:02.567867 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-01 14:13:02.567872 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-11-01 14:13:02.567877 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-rbd) 2025-11-01 14:13:02.567883 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-11-01 14:13:02.567888 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-01 14:13:02.567893 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-11-01 14:13:02.567898 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-01 14:13:02.567904 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-11-01 14:13:02.567909 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-11-01 14:13:02.567914 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-11-01 14:13:02.567920 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-11-01 14:13:02.567925 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-11-01 14:13:02.567930 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-11-01 14:13:02.567936 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-11-01 14:13:02.567941 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-11-01 14:13:02.567946 | orchestrator | 2025-11-01 14:13:02.567952 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-11-01 14:13:02.567957 | orchestrator | Saturday 01 November 2025 14:04:10 +0000 (0:00:07.624) 0:03:07.741 ***** 2025-11-01 14:13:02.567962 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.567968 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.567973 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.567978 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.567984 | orchestrator | 2025-11-01 14:13:02.567989 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-11-01 14:13:02.567994 | orchestrator | Saturday 01 November 2025 14:04:12 +0000 (0:00:01.236) 0:03:08.977 ***** 2025-11-01 14:13:02.568000 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.568006 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.568011 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.568016 | orchestrator | 2025-11-01 14:13:02.568022 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-11-01 14:13:02.568031 | orchestrator | Saturday 01 November 2025 14:04:13 +0000 (0:00:01.264) 0:03:10.242 ***** 2025-11-01 14:13:02.568036 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.568041 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.568047 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.568052 | orchestrator | 
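The two RGW tasks above run once per entry in rgw_instances: they create a per-instance directory under /var/lib/ceph/radosgw and drop an environment file into it, which the radosgw systemd unit later reads to know which instance to start (rgw0 on each of testbed-node-3/4/5 in this run). A rough sketch of that loop, assuming the cluster and rgw_instances variables and the directory layout implied by the log; illustrative, not the verbatim role tasks:

    - name: Create rados gateway instance directories
      ansible.builtin.file:
        path: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
        state: directory
        mode: "0755"
      loop: "{{ rgw_instances }}"

    - name: Generate environment file
      ansible.builtin.copy:
        dest: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}/EnvironmentFile"
        content: |
          INST_NAME={{ item.instance_name }}
        mode: "0644"
      loop: "{{ rgw_instances }}"

The bind address and port themselves end up in the rendered rgw configuration (the "beast endpoint=192.168.16.x:8081" entries visible later in the log); the environment file only identifies the instance, which is what lets several rgw instances coexist on one host without editing unit files.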
2025-11-01 14:13:02.568058 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-11-01 14:13:02.568063 | orchestrator | Saturday 01 November 2025 14:04:14 +0000 (0:00:01.453) 0:03:11.696 ***** 2025-11-01 14:13:02.568068 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.568074 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.568079 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.568084 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568089 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568095 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568100 | orchestrator | 2025-11-01 14:13:02.568105 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-11-01 14:13:02.568111 | orchestrator | Saturday 01 November 2025 14:04:15 +0000 (0:00:00.677) 0:03:12.374 ***** 2025-11-01 14:13:02.568116 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.568121 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.568127 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.568132 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568137 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568142 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568147 | orchestrator | 2025-11-01 14:13:02.568153 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-11-01 14:13:02.568158 | orchestrator | Saturday 01 November 2025 14:04:16 +0000 (0:00:01.167) 0:03:13.541 ***** 2025-11-01 14:13:02.568163 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.568169 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.568174 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.568179 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568184 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568192 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568198 | orchestrator | 2025-11-01 14:13:02.568203 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-11-01 14:13:02.568209 | orchestrator | Saturday 01 November 2025 14:04:17 +0000 (0:00:00.679) 0:03:14.221 ***** 2025-11-01 14:13:02.568218 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.568223 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.568228 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.568234 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568239 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568244 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568249 | orchestrator | 2025-11-01 14:13:02.568255 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-11-01 14:13:02.568260 | orchestrator | Saturday 01 November 2025 14:04:18 +0000 (0:00:01.055) 0:03:15.276 ***** 2025-11-01 14:13:02.568265 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.568271 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.568276 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.568281 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568286 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568292 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568297 | orchestrator | 2025-11-01 14:13:02.568302 | 
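The num_osds bookkeeping in this stretch works in two passes: the tasks above reset the counter and count the OSDs declared for the lvm scenario, and the tasks just below ask ceph-volume what already exists on the node (the "lvm batch --report" path is skipped here while the "lvm list" path runs) and fold that into the total. A compressed sketch of the flow, where ceph_volume_cmd and the exact combination rule are assumptions rather than the verbatim role logic:

    - name: Count number of osds for lvm scenario
      ansible.builtin.set_fact:
        num_osds: "{{ lvm_volumes | default([]) | length }}"

    - name: Run 'ceph-volume lvm list' to see how many osds have already been created
      ansible.builtin.command: "{{ ceph_volume_cmd }} lvm list --format json"
      register: ceph_volume_lvm_list
      changed_when: false

    - name: Set_fact num_osds (add existing osds)
      ansible.builtin.set_fact:
        num_osds: "{{ num_osds | int + (ceph_volume_lvm_list.stdout | default('{}') | from_json | length) }}"

The resulting count only matters on the OSD nodes, which is why testbed-node-0/1/2 skip every task in this block.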
orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-11-01 14:13:02.568307 | orchestrator | Saturday 01 November 2025 14:04:19 +0000 (0:00:00.836) 0:03:16.113 ***** 2025-11-01 14:13:02.568317 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.568322 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.568327 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.568333 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568338 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568343 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568348 | orchestrator | 2025-11-01 14:13:02.568354 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-11-01 14:13:02.568359 | orchestrator | Saturday 01 November 2025 14:04:20 +0000 (0:00:00.935) 0:03:17.049 ***** 2025-11-01 14:13:02.568364 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.568370 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.568375 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.568380 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568385 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568391 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568396 | orchestrator | 2025-11-01 14:13:02.568401 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-11-01 14:13:02.568407 | orchestrator | Saturday 01 November 2025 14:04:21 +0000 (0:00:00.995) 0:03:18.044 ***** 2025-11-01 14:13:02.568412 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.568417 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.568422 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.568428 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568433 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568438 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568443 | orchestrator | 2025-11-01 14:13:02.568449 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-11-01 14:13:02.568454 | orchestrator | Saturday 01 November 2025 14:04:22 +0000 (0:00:01.067) 0:03:19.112 ***** 2025-11-01 14:13:02.568459 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568465 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568470 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568475 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.568481 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.568486 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.568491 | orchestrator | 2025-11-01 14:13:02.568497 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-11-01 14:13:02.568558 | orchestrator | Saturday 01 November 2025 14:04:25 +0000 (0:00:03.237) 0:03:22.350 ***** 2025-11-01 14:13:02.568564 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.568569 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.568574 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.568580 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568585 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568590 | orchestrator | skipping: [testbed-node-2] 
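The _osd_memory_target fact set in the next task is where num_osds is consumed: the intent is to give each OSD a share of the node's RAM so that the combined OSD memory targets stay within what the host can afford. A hedged sketch of that kind of calculation, with safety_factor as an assumed tunable; the exact expression in the role may well differ:

    - name: Set_fact _osd_memory_target
      ansible.builtin.set_fact:
        # bytes of RAM budgeted per OSD on this host (safety_factor, e.g. 0.7, is an assumption)
        _osd_memory_target: "{{ (((ansible_facts['memtotal_mb'] | int) * 1048576 * (safety_factor | float)) / (num_osds | int)) | int }}"
      when: num_osds | int > 0

Because the value is derived per host, the later "Set osd_memory_target to cluster host config" task can push it as a host-level override rather than a cluster-wide default; in this run that task is skipped on all nodes.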
2025-11-01 14:13:02.568596 | orchestrator | 2025-11-01 14:13:02.568601 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-11-01 14:13:02.568606 | orchestrator | Saturday 01 November 2025 14:04:26 +0000 (0:00:00.946) 0:03:23.297 ***** 2025-11-01 14:13:02.568612 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.568617 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.568622 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.568628 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568633 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568638 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568644 | orchestrator | 2025-11-01 14:13:02.568649 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-11-01 14:13:02.568654 | orchestrator | Saturday 01 November 2025 14:04:27 +0000 (0:00:01.401) 0:03:24.699 ***** 2025-11-01 14:13:02.568660 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.568665 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.568674 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.568680 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568685 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568690 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568696 | orchestrator | 2025-11-01 14:13:02.568701 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-11-01 14:13:02.568706 | orchestrator | Saturday 01 November 2025 14:04:28 +0000 (0:00:00.995) 0:03:25.694 ***** 2025-11-01 14:13:02.568712 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.568717 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.568728 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.568733 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568739 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568744 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568749 | orchestrator | 2025-11-01 14:13:02.568758 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-11-01 14:13:02.568764 | orchestrator | Saturday 01 November 2025 14:04:29 +0000 (0:00:01.097) 0:03:26.792 ***** 2025-11-01 14:13:02.568770 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-11-01 14:13:02.568778 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-11-01 14:13:02.568784 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.568790 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 
'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-11-01 14:13:02.568795 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-11-01 14:13:02.568801 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-11-01 14:13:02.568807 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-11-01 14:13:02.568812 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.568817 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.568823 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568828 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568833 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568838 | orchestrator | 2025-11-01 14:13:02.568844 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-11-01 14:13:02.568849 | orchestrator | Saturday 01 November 2025 14:04:31 +0000 (0:00:01.215) 0:03:28.007 ***** 2025-11-01 14:13:02.568860 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.568865 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.568870 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.568876 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568881 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568886 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568891 | orchestrator | 2025-11-01 14:13:02.568897 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-11-01 14:13:02.568902 | orchestrator | Saturday 01 November 2025 14:04:31 +0000 (0:00:00.758) 0:03:28.765 ***** 2025-11-01 14:13:02.568907 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.568913 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.568918 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.568923 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568929 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568934 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568939 | orchestrator | 2025-11-01 14:13:02.568945 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-11-01 14:13:02.568950 | orchestrator | Saturday 01 November 2025 14:04:32 +0000 (0:00:00.970) 0:03:29.736 ***** 2025-11-01 14:13:02.568955 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.568961 | orchestrator | skipping: 
[testbed-node-5] 2025-11-01 14:13:02.568966 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.568971 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.568977 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.568982 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.568987 | orchestrator | 2025-11-01 14:13:02.568993 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-11-01 14:13:02.568998 | orchestrator | Saturday 01 November 2025 14:04:34 +0000 (0:00:01.486) 0:03:31.223 ***** 2025-11-01 14:13:02.569003 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569009 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.569014 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.569019 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.569024 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.569030 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.569035 | orchestrator | 2025-11-01 14:13:02.569043 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-11-01 14:13:02.569049 | orchestrator | Saturday 01 November 2025 14:04:35 +0000 (0:00:01.146) 0:03:32.369 ***** 2025-11-01 14:13:02.569054 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569063 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.569069 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.569074 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.569079 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.569085 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.569090 | orchestrator | 2025-11-01 14:13:02.569095 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-11-01 14:13:02.569101 | orchestrator | Saturday 01 November 2025 14:04:36 +0000 (0:00:00.794) 0:03:33.164 ***** 2025-11-01 14:13:02.569106 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.569111 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.569117 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.569122 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.569127 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.569133 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.569138 | orchestrator | 2025-11-01 14:13:02.569143 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-11-01 14:13:02.569149 | orchestrator | Saturday 01 November 2025 14:04:37 +0000 (0:00:01.500) 0:03:34.664 ***** 2025-11-01 14:13:02.569154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 14:13:02.569164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:13:02.569169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:13:02.569174 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569180 | orchestrator | 2025-11-01 14:13:02.569185 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-11-01 14:13:02.569190 | orchestrator | Saturday 01 November 2025 14:04:38 +0000 (0:00:00.683) 0:03:35.347 ***** 2025-11-01 14:13:02.569196 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 14:13:02.569201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  
2025-11-01 14:13:02.569206 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:13:02.569212 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569217 | orchestrator | 2025-11-01 14:13:02.569222 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-11-01 14:13:02.569228 | orchestrator | Saturday 01 November 2025 14:04:38 +0000 (0:00:00.441) 0:03:35.789 ***** 2025-11-01 14:13:02.569233 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 14:13:02.569238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:13:02.569244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:13:02.569249 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569254 | orchestrator | 2025-11-01 14:13:02.569260 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-11-01 14:13:02.569265 | orchestrator | Saturday 01 November 2025 14:04:39 +0000 (0:00:00.463) 0:03:36.252 ***** 2025-11-01 14:13:02.569270 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.569276 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.569281 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.569286 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.569292 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.569297 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.569302 | orchestrator | 2025-11-01 14:13:02.569308 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-11-01 14:13:02.569313 | orchestrator | Saturday 01 November 2025 14:04:40 +0000 (0:00:00.771) 0:03:37.024 ***** 2025-11-01 14:13:02.569318 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-11-01 14:13:02.569324 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-11-01 14:13:02.569329 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-01 14:13:02.569335 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-11-01 14:13:02.569340 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.569345 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-11-01 14:13:02.569351 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.569356 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-11-01 14:13:02.569361 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.569367 | orchestrator | 2025-11-01 14:13:02.569372 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-11-01 14:13:02.569377 | orchestrator | Saturday 01 November 2025 14:04:43 +0000 (0:00:03.462) 0:03:40.486 ***** 2025-11-01 14:13:02.569383 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.569388 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.569393 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.569398 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.569404 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.569409 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.569414 | orchestrator | 2025-11-01 14:13:02.569420 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-01 14:13:02.569425 | orchestrator | Saturday 01 November 2025 14:04:47 +0000 (0:00:03.588) 0:03:44.075 ***** 2025-11-01 14:13:02.569430 | orchestrator | changed: [testbed-node-4] 
2025-11-01 14:13:02.569436 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.569441 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.569450 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.569455 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.569461 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.569466 | orchestrator | 2025-11-01 14:13:02.569471 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-11-01 14:13:02.569477 | orchestrator | Saturday 01 November 2025 14:04:48 +0000 (0:00:01.283) 0:03:45.359 ***** 2025-11-01 14:13:02.569482 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569487 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.569493 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.569506 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.569512 | orchestrator | 2025-11-01 14:13:02.569520 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-11-01 14:13:02.569526 | orchestrator | Saturday 01 November 2025 14:04:49 +0000 (0:00:01.236) 0:03:46.595 ***** 2025-11-01 14:13:02.569531 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.569536 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.569542 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.569547 | orchestrator | 2025-11-01 14:13:02.569556 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-11-01 14:13:02.569561 | orchestrator | Saturday 01 November 2025 14:04:50 +0000 (0:00:00.413) 0:03:47.008 ***** 2025-11-01 14:13:02.569567 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.569572 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.569577 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.569583 | orchestrator | 2025-11-01 14:13:02.569588 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-11-01 14:13:02.569593 | orchestrator | Saturday 01 November 2025 14:04:51 +0000 (0:00:01.577) 0:03:48.586 ***** 2025-11-01 14:13:02.569599 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-01 14:13:02.569604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-01 14:13:02.569609 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-01 14:13:02.569614 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.569620 | orchestrator | 2025-11-01 14:13:02.569625 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-11-01 14:13:02.569630 | orchestrator | Saturday 01 November 2025 14:04:52 +0000 (0:00:00.723) 0:03:49.310 ***** 2025-11-01 14:13:02.569636 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.569641 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.569646 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.569652 | orchestrator | 2025-11-01 14:13:02.569657 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-11-01 14:13:02.569662 | orchestrator | Saturday 01 November 2025 14:04:52 +0000 (0:00:00.423) 0:03:49.733 ***** 2025-11-01 14:13:02.569668 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.569673 | orchestrator | skipping: [testbed-node-1] 2025-11-01 
14:13:02.569678 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.569684 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.569689 | orchestrator | 2025-11-01 14:13:02.569695 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-11-01 14:13:02.569700 | orchestrator | Saturday 01 November 2025 14:04:54 +0000 (0:00:01.761) 0:03:51.494 ***** 2025-11-01 14:13:02.569705 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 14:13:02.569711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:13:02.569716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:13:02.569721 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569727 | orchestrator | 2025-11-01 14:13:02.569732 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-11-01 14:13:02.569737 | orchestrator | Saturday 01 November 2025 14:04:55 +0000 (0:00:00.468) 0:03:51.963 ***** 2025-11-01 14:13:02.569746 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569752 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.569757 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.569763 | orchestrator | 2025-11-01 14:13:02.569768 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-11-01 14:13:02.569773 | orchestrator | Saturday 01 November 2025 14:04:55 +0000 (0:00:00.452) 0:03:52.415 ***** 2025-11-01 14:13:02.569778 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569784 | orchestrator | 2025-11-01 14:13:02.569789 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-11-01 14:13:02.569794 | orchestrator | Saturday 01 November 2025 14:04:55 +0000 (0:00:00.276) 0:03:52.692 ***** 2025-11-01 14:13:02.569800 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569805 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.569810 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.569816 | orchestrator | 2025-11-01 14:13:02.569821 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-11-01 14:13:02.569826 | orchestrator | Saturday 01 November 2025 14:04:56 +0000 (0:00:00.358) 0:03:53.050 ***** 2025-11-01 14:13:02.569832 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569837 | orchestrator | 2025-11-01 14:13:02.569842 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-11-01 14:13:02.569848 | orchestrator | Saturday 01 November 2025 14:04:56 +0000 (0:00:00.317) 0:03:53.368 ***** 2025-11-01 14:13:02.569853 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569858 | orchestrator | 2025-11-01 14:13:02.569864 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-11-01 14:13:02.569869 | orchestrator | Saturday 01 November 2025 14:04:56 +0000 (0:00:00.252) 0:03:53.620 ***** 2025-11-01 14:13:02.569874 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569880 | orchestrator | 2025-11-01 14:13:02.569885 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-11-01 14:13:02.569890 | orchestrator | Saturday 01 November 2025 14:04:56 +0000 
(0:00:00.133) 0:03:53.754 ***** 2025-11-01 14:13:02.569896 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569901 | orchestrator | 2025-11-01 14:13:02.569906 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-11-01 14:13:02.569912 | orchestrator | Saturday 01 November 2025 14:04:57 +0000 (0:00:00.848) 0:03:54.603 ***** 2025-11-01 14:13:02.569917 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569922 | orchestrator | 2025-11-01 14:13:02.569928 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-11-01 14:13:02.569933 | orchestrator | Saturday 01 November 2025 14:04:58 +0000 (0:00:00.292) 0:03:54.895 ***** 2025-11-01 14:13:02.569938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 14:13:02.569944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:13:02.569949 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:13:02.569954 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569960 | orchestrator | 2025-11-01 14:13:02.569968 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-11-01 14:13:02.569974 | orchestrator | Saturday 01 November 2025 14:04:58 +0000 (0:00:00.669) 0:03:55.565 ***** 2025-11-01 14:13:02.569979 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.569987 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.569993 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.569998 | orchestrator | 2025-11-01 14:13:02.570004 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-11-01 14:13:02.570009 | orchestrator | Saturday 01 November 2025 14:04:59 +0000 (0:00:00.446) 0:03:56.011 ***** 2025-11-01 14:13:02.570090 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.570098 | orchestrator | 2025-11-01 14:13:02.570104 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-11-01 14:13:02.570114 | orchestrator | Saturday 01 November 2025 14:04:59 +0000 (0:00:00.247) 0:03:56.259 ***** 2025-11-01 14:13:02.570119 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.570124 | orchestrator | 2025-11-01 14:13:02.570130 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-11-01 14:13:02.570135 | orchestrator | Saturday 01 November 2025 14:04:59 +0000 (0:00:00.292) 0:03:56.552 ***** 2025-11-01 14:13:02.570140 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.570146 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.570151 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.570157 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.570162 | orchestrator | 2025-11-01 14:13:02.570167 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-11-01 14:13:02.570173 | orchestrator | Saturday 01 November 2025 14:05:01 +0000 (0:00:01.326) 0:03:57.879 ***** 2025-11-01 14:13:02.570178 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.570183 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.570189 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.570194 | orchestrator | 2025-11-01 14:13:02.570199 | orchestrator | RUNNING 
HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-11-01 14:13:02.570205 | orchestrator | Saturday 01 November 2025 14:05:01 +0000 (0:00:00.373) 0:03:58.252 ***** 2025-11-01 14:13:02.570210 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.570215 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.570221 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.570226 | orchestrator | 2025-11-01 14:13:02.570231 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-11-01 14:13:02.570237 | orchestrator | Saturday 01 November 2025 14:05:02 +0000 (0:00:01.232) 0:03:59.484 ***** 2025-11-01 14:13:02.570242 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 14:13:02.570247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:13:02.570253 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:13:02.570258 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.570263 | orchestrator | 2025-11-01 14:13:02.570269 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-11-01 14:13:02.570274 | orchestrator | Saturday 01 November 2025 14:05:03 +0000 (0:00:00.929) 0:04:00.414 ***** 2025-11-01 14:13:02.570279 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.570285 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.570290 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.570295 | orchestrator | 2025-11-01 14:13:02.570301 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-11-01 14:13:02.570306 | orchestrator | Saturday 01 November 2025 14:05:04 +0000 (0:00:00.817) 0:04:01.232 ***** 2025-11-01 14:13:02.570311 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.570317 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.570322 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.570327 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.570333 | orchestrator | 2025-11-01 14:13:02.570338 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-11-01 14:13:02.570343 | orchestrator | Saturday 01 November 2025 14:05:05 +0000 (0:00:00.950) 0:04:02.182 ***** 2025-11-01 14:13:02.570349 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.570354 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.570359 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.570365 | orchestrator | 2025-11-01 14:13:02.570370 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-11-01 14:13:02.570375 | orchestrator | Saturday 01 November 2025 14:05:06 +0000 (0:00:00.717) 0:04:02.899 ***** 2025-11-01 14:13:02.570381 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.570386 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.570395 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.570400 | orchestrator | 2025-11-01 14:13:02.570406 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-11-01 14:13:02.570411 | orchestrator | Saturday 01 November 2025 14:05:07 +0000 (0:00:01.277) 0:04:04.177 ***** 2025-11-01 14:13:02.570416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 
14:13:02.570422 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:13:02.570427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:13:02.570432 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.570437 | orchestrator | 2025-11-01 14:13:02.570443 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-11-01 14:13:02.570448 | orchestrator | Saturday 01 November 2025 14:05:08 +0000 (0:00:00.734) 0:04:04.911 ***** 2025-11-01 14:13:02.570453 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.570459 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.570464 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.570470 | orchestrator | 2025-11-01 14:13:02.570475 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-11-01 14:13:02.570480 | orchestrator | Saturday 01 November 2025 14:05:08 +0000 (0:00:00.372) 0:04:05.284 ***** 2025-11-01 14:13:02.570486 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.570491 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.570528 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.570535 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.570540 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.570546 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.570551 | orchestrator | 2025-11-01 14:13:02.570556 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-11-01 14:13:02.570588 | orchestrator | Saturday 01 November 2025 14:05:09 +0000 (0:00:00.932) 0:04:06.216 ***** 2025-11-01 14:13:02.570594 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.570600 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.570605 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.570610 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.570616 | orchestrator | 2025-11-01 14:13:02.570621 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-11-01 14:13:02.570627 | orchestrator | Saturday 01 November 2025 14:05:10 +0000 (0:00:00.990) 0:04:07.207 ***** 2025-11-01 14:13:02.570632 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.570637 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.570642 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.570648 | orchestrator | 2025-11-01 14:13:02.570653 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-11-01 14:13:02.570659 | orchestrator | Saturday 01 November 2025 14:05:11 +0000 (0:00:00.812) 0:04:08.019 ***** 2025-11-01 14:13:02.570664 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.570669 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.570674 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.570679 | orchestrator | 2025-11-01 14:13:02.570685 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-11-01 14:13:02.570690 | orchestrator | Saturday 01 November 2025 14:05:12 +0000 (0:00:01.417) 0:04:09.437 ***** 2025-11-01 14:13:02.570696 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-01 14:13:02.570701 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-1)  2025-11-01 14:13:02.570706 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-01 14:13:02.570711 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.570717 | orchestrator | 2025-11-01 14:13:02.570722 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-11-01 14:13:02.570727 | orchestrator | Saturday 01 November 2025 14:05:13 +0000 (0:00:00.758) 0:04:10.195 ***** 2025-11-01 14:13:02.570737 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.570743 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.570748 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.570753 | orchestrator | 2025-11-01 14:13:02.570759 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-11-01 14:13:02.570764 | orchestrator | 2025-11-01 14:13:02.570770 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-01 14:13:02.570775 | orchestrator | Saturday 01 November 2025 14:05:14 +0000 (0:00:00.779) 0:04:10.975 ***** 2025-11-01 14:13:02.570780 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:13:02.570791 | orchestrator | 2025-11-01 14:13:02.570797 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-01 14:13:02.570802 | orchestrator | Saturday 01 November 2025 14:05:15 +0000 (0:00:00.894) 0:04:11.870 ***** 2025-11-01 14:13:02.570807 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.570813 | orchestrator | 2025-11-01 14:13:02.570818 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-01 14:13:02.570823 | orchestrator | Saturday 01 November 2025 14:05:15 +0000 (0:00:00.651) 0:04:12.522 ***** 2025-11-01 14:13:02.570829 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.570834 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.570839 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.570845 | orchestrator | 2025-11-01 14:13:02.570850 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-01 14:13:02.570855 | orchestrator | Saturday 01 November 2025 14:05:16 +0000 (0:00:01.211) 0:04:13.734 ***** 2025-11-01 14:13:02.570861 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.570866 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.570871 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.570877 | orchestrator | 2025-11-01 14:13:02.570882 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-01 14:13:02.570887 | orchestrator | Saturday 01 November 2025 14:05:17 +0000 (0:00:00.348) 0:04:14.082 ***** 2025-11-01 14:13:02.570893 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.570898 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.570903 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.570908 | orchestrator | 2025-11-01 14:13:02.570914 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-01
14:13:02.570919 | orchestrator | Saturday 01 November 2025 14:05:17 +0000 (0:00:00.325) 0:04:14.407 ***** 2025-11-01 14:13:02.570924 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.570930 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.570935 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.570940 | orchestrator | 2025-11-01 14:13:02.570946 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-01 14:13:02.570951 | orchestrator | Saturday 01 November 2025 14:05:17 +0000 (0:00:00.318) 0:04:14.726 ***** 2025-11-01 14:13:02.570956 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.570962 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.570967 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.570972 | orchestrator | 2025-11-01 14:13:02.570978 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-01 14:13:02.570983 | orchestrator | Saturday 01 November 2025 14:05:19 +0000 (0:00:01.140) 0:04:15.866 ***** 2025-11-01 14:13:02.570988 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.570994 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.571002 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.571008 | orchestrator | 2025-11-01 14:13:02.571013 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-01 14:13:02.571022 | orchestrator | Saturday 01 November 2025 14:05:19 +0000 (0:00:00.349) 0:04:16.215 ***** 2025-11-01 14:13:02.571045 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.571052 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.571057 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.571062 | orchestrator | 2025-11-01 14:13:02.571068 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-01 14:13:02.571073 | orchestrator | Saturday 01 November 2025 14:05:19 +0000 (0:00:00.334) 0:04:16.549 ***** 2025-11-01 14:13:02.571078 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.571084 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.571089 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.571094 | orchestrator | 2025-11-01 14:13:02.571100 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-01 14:13:02.571105 | orchestrator | Saturday 01 November 2025 14:05:20 +0000 (0:00:00.788) 0:04:17.338 ***** 2025-11-01 14:13:02.571110 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.571115 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.571121 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.571126 | orchestrator | 2025-11-01 14:13:02.571131 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-01 14:13:02.571137 | orchestrator | Saturday 01 November 2025 14:05:21 +0000 (0:00:01.207) 0:04:18.545 ***** 2025-11-01 14:13:02.571142 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.571147 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.571153 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.571158 | orchestrator | 2025-11-01 14:13:02.571163 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-01 14:13:02.571169 | orchestrator | Saturday 01 November 2025 14:05:22 +0000 (0:00:00.350) 0:04:18.895 ***** 
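The check_running_containers.yml block above only asks the container runtime whether a container for each daemon type already exists on the host and records the answers in the handler_*_status facts that follow. Outside of Ansible the same probe looks roughly like this; the ceph-<daemon>-<hostname> naming is an assumption for illustration and is not taken from this log:

  # a non-empty ID list means the daemon container is present on this host
  docker ps -q --filter "name=ceph-mon-$(hostname)"
  docker ps -q --filter "name=ceph-mgr-$(hostname)"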
2025-11-01 14:13:02.571174 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.571179 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.571184 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.571190 | orchestrator | 2025-11-01 14:13:02.571195 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-01 14:13:02.571201 | orchestrator | Saturday 01 November 2025 14:05:22 +0000 (0:00:00.414) 0:04:19.310 ***** 2025-11-01 14:13:02.571206 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.571211 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.571217 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.571222 | orchestrator | 2025-11-01 14:13:02.571227 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-01 14:13:02.571232 | orchestrator | Saturday 01 November 2025 14:05:22 +0000 (0:00:00.452) 0:04:19.762 ***** 2025-11-01 14:13:02.571238 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.571243 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.571248 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.571254 | orchestrator | 2025-11-01 14:13:02.571259 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-01 14:13:02.571264 | orchestrator | Saturday 01 November 2025 14:05:23 +0000 (0:00:00.361) 0:04:20.123 ***** 2025-11-01 14:13:02.571270 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.571275 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.571280 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.571285 | orchestrator | 2025-11-01 14:13:02.571291 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-01 14:13:02.571296 | orchestrator | Saturday 01 November 2025 14:05:24 +0000 (0:00:00.743) 0:04:20.867 ***** 2025-11-01 14:13:02.571301 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.571307 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.571312 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.571317 | orchestrator | 2025-11-01 14:13:02.571323 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-01 14:13:02.571328 | orchestrator | Saturday 01 November 2025 14:05:24 +0000 (0:00:00.375) 0:04:21.242 ***** 2025-11-01 14:13:02.571337 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.571342 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.571348 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.571353 | orchestrator | 2025-11-01 14:13:02.571358 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-01 14:13:02.571364 | orchestrator | Saturday 01 November 2025 14:05:24 +0000 (0:00:00.386) 0:04:21.629 ***** 2025-11-01 14:13:02.571369 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.571374 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.571380 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.571385 | orchestrator | 2025-11-01 14:13:02.571390 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-01 14:13:02.571396 | orchestrator | Saturday 01 November 2025 14:05:25 +0000 (0:00:00.512) 0:04:22.141 ***** 2025-11-01 14:13:02.571401 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.571406 | 
orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.571411 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.571417 | orchestrator | 2025-11-01 14:13:02.571422 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-01 14:13:02.571427 | orchestrator | Saturday 01 November 2025 14:05:26 +0000 (0:00:00.940) 0:04:23.082 ***** 2025-11-01 14:13:02.571433 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.571438 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.571443 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.571448 | orchestrator | 2025-11-01 14:13:02.571454 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-11-01 14:13:02.571459 | orchestrator | Saturday 01 November 2025 14:05:26 +0000 (0:00:00.655) 0:04:23.738 ***** 2025-11-01 14:13:02.571464 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.571470 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.571475 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.571480 | orchestrator | 2025-11-01 14:13:02.571486 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-11-01 14:13:02.571491 | orchestrator | Saturday 01 November 2025 14:05:27 +0000 (0:00:00.384) 0:04:24.123 ***** 2025-11-01 14:13:02.571496 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.571532 | orchestrator | 2025-11-01 14:13:02.571543 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-11-01 14:13:02.571549 | orchestrator | Saturday 01 November 2025 14:05:28 +0000 (0:00:01.070) 0:04:25.193 ***** 2025-11-01 14:13:02.571555 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.571560 | orchestrator | 2025-11-01 14:13:02.571584 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-11-01 14:13:02.571589 | orchestrator | Saturday 01 November 2025 14:05:28 +0000 (0:00:00.166) 0:04:25.359 ***** 2025-11-01 14:13:02.571594 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-11-01 14:13:02.571599 | orchestrator | 2025-11-01 14:13:02.571604 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-11-01 14:13:02.571609 | orchestrator | Saturday 01 November 2025 14:05:29 +0000 (0:00:01.332) 0:04:26.692 ***** 2025-11-01 14:13:02.571613 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.571618 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.571623 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.571627 | orchestrator | 2025-11-01 14:13:02.571632 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-11-01 14:13:02.571637 | orchestrator | Saturday 01 November 2025 14:05:30 +0000 (0:00:00.552) 0:04:27.245 ***** 2025-11-01 14:13:02.571642 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.571647 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.571651 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.571656 | orchestrator | 2025-11-01 14:13:02.571661 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-11-01 14:13:02.571665 | orchestrator | Saturday 01 November 2025 14:05:30 +0000 (0:00:00.492) 0:04:27.738 ***** 2025-11-01 14:13:02.571675 | orchestrator | changed: [testbed-node-0] 
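The keyring tasks above, together with the monmap and mkfs tasks that follow, correspond to the standard manual monitor bootstrap. Purely as a sketch of what the role is wrapping (the paths and the <fsid> placeholder are assumptions; 192.168.16.10 is the testbed-node-0 address visible later in this log):

  # monitor secret plus an admin keyring, imported into the mon keyring
  ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
  ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin \
    --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
  ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
  # initial monmap and monitor store for the first monitor
  monmaptool --create --fsid <fsid> --add testbed-node-0 192.168.16.10 /tmp/monmap
  ceph-mon --mkfs -i testbed-node-0 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring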
2025-11-01 14:13:02.571680 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.571685 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.571690 | orchestrator | 2025-11-01 14:13:02.571695 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-11-01 14:13:02.571699 | orchestrator | Saturday 01 November 2025 14:05:32 +0000 (0:00:01.572) 0:04:29.310 ***** 2025-11-01 14:13:02.571704 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.571709 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.571714 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.571718 | orchestrator | 2025-11-01 14:13:02.571723 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-11-01 14:13:02.571728 | orchestrator | Saturday 01 November 2025 14:05:33 +0000 (0:00:00.939) 0:04:30.250 ***** 2025-11-01 14:13:02.571733 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.571737 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.571742 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.571747 | orchestrator | 2025-11-01 14:13:02.571752 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-11-01 14:13:02.571756 | orchestrator | Saturday 01 November 2025 14:05:34 +0000 (0:00:00.848) 0:04:31.098 ***** 2025-11-01 14:13:02.571761 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.571766 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.571770 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.571775 | orchestrator | 2025-11-01 14:13:02.571780 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-11-01 14:13:02.571785 | orchestrator | Saturday 01 November 2025 14:05:35 +0000 (0:00:00.904) 0:04:32.003 ***** 2025-11-01 14:13:02.571790 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.571794 | orchestrator | 2025-11-01 14:13:02.571799 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-11-01 14:13:02.571804 | orchestrator | Saturday 01 November 2025 14:05:37 +0000 (0:00:02.208) 0:04:34.212 ***** 2025-11-01 14:13:02.571809 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.571813 | orchestrator | 2025-11-01 14:13:02.571818 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-11-01 14:13:02.571823 | orchestrator | Saturday 01 November 2025 14:05:38 +0000 (0:00:00.849) 0:04:35.061 ***** 2025-11-01 14:13:02.571828 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-01 14:13:02.571832 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 14:13:02.571837 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 14:13:02.571842 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-01 14:13:02.571847 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-11-01 14:13:02.571851 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-01 14:13:02.571856 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-01 14:13:02.571861 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-11-01 14:13:02.571866 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-01 
14:13:02.571870 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-11-01 14:13:02.571875 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-11-01 14:13:02.571880 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-11-01 14:13:02.571885 | orchestrator | 2025-11-01 14:13:02.571889 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-11-01 14:13:02.571894 | orchestrator | Saturday 01 November 2025 14:05:42 +0000 (0:00:04.180) 0:04:39.241 ***** 2025-11-01 14:13:02.571899 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.571904 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.571908 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.571913 | orchestrator | 2025-11-01 14:13:02.571918 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-11-01 14:13:02.571927 | orchestrator | Saturday 01 November 2025 14:05:44 +0000 (0:00:01.763) 0:04:41.005 ***** 2025-11-01 14:13:02.571931 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.571936 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.571941 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.571946 | orchestrator | 2025-11-01 14:13:02.571950 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-11-01 14:13:02.571955 | orchestrator | Saturday 01 November 2025 14:05:44 +0000 (0:00:00.437) 0:04:41.443 ***** 2025-11-01 14:13:02.571960 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.571965 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.571972 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.571977 | orchestrator | 2025-11-01 14:13:02.571982 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-11-01 14:13:02.571987 | orchestrator | Saturday 01 November 2025 14:05:45 +0000 (0:00:00.756) 0:04:42.199 ***** 2025-11-01 14:13:02.572006 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.572012 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.572017 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.572021 | orchestrator | 2025-11-01 14:13:02.572026 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-11-01 14:13:02.572031 | orchestrator | Saturday 01 November 2025 14:05:47 +0000 (0:00:02.078) 0:04:44.278 ***** 2025-11-01 14:13:02.572036 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.572040 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.572045 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.572050 | orchestrator | 2025-11-01 14:13:02.572054 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-11-01 14:13:02.572059 | orchestrator | Saturday 01 November 2025 14:05:49 +0000 (0:00:01.802) 0:04:46.081 ***** 2025-11-01 14:13:02.572064 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.572069 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.572073 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.572078 | orchestrator | 2025-11-01 14:13:02.572083 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-11-01 14:13:02.572087 | orchestrator | Saturday 01 November 2025 14:05:49 +0000 (0:00:00.505) 0:04:46.587 ***** 2025-11-01 14:13:02.572092 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-11-01 14:13:02.572097 | orchestrator | 2025-11-01 14:13:02.572102 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-11-01 14:13:02.572106 | orchestrator | Saturday 01 November 2025 14:05:51 +0000 (0:00:01.581) 0:04:48.169 ***** 2025-11-01 14:13:02.572111 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.572116 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.572121 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.572125 | orchestrator | 2025-11-01 14:13:02.572130 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-11-01 14:13:02.572135 | orchestrator | Saturday 01 November 2025 14:05:51 +0000 (0:00:00.390) 0:04:48.560 ***** 2025-11-01 14:13:02.572139 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.572144 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.572149 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.572154 | orchestrator | 2025-11-01 14:13:02.572158 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-11-01 14:13:02.572163 | orchestrator | Saturday 01 November 2025 14:05:52 +0000 (0:00:00.317) 0:04:48.877 ***** 2025-11-01 14:13:02.572168 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.572172 | orchestrator | 2025-11-01 14:13:02.572177 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-11-01 14:13:02.572182 | orchestrator | Saturday 01 November 2025 14:05:52 +0000 (0:00:00.826) 0:04:49.704 ***** 2025-11-01 14:13:02.572190 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.572195 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.572200 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.572205 | orchestrator | 2025-11-01 14:13:02.572209 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-11-01 14:13:02.572214 | orchestrator | Saturday 01 November 2025 14:05:54 +0000 (0:00:01.573) 0:04:51.277 ***** 2025-11-01 14:13:02.572219 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.572224 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.572228 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.572233 | orchestrator | 2025-11-01 14:13:02.572238 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-11-01 14:13:02.572243 | orchestrator | Saturday 01 November 2025 14:05:55 +0000 (0:00:01.277) 0:04:52.554 ***** 2025-11-01 14:13:02.572247 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.572252 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.572257 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.572261 | orchestrator | 2025-11-01 14:13:02.572266 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-11-01 14:13:02.572271 | orchestrator | Saturday 01 November 2025 14:05:57 +0000 (0:00:01.758) 0:04:54.313 ***** 2025-11-01 14:13:02.572275 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.572280 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.572285 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.572290 | 
orchestrator | 2025-11-01 14:13:02.572294 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-11-01 14:13:02.572299 | orchestrator | Saturday 01 November 2025 14:05:59 +0000 (0:00:02.216) 0:04:56.530 ***** 2025-11-01 14:13:02.572304 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.572309 | orchestrator | 2025-11-01 14:13:02.572313 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-11-01 14:13:02.572318 | orchestrator | Saturday 01 November 2025 14:06:00 +0000 (0:00:00.590) 0:04:57.121 ***** 2025-11-01 14:13:02.572323 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-11-01 14:13:02.572328 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.572332 | orchestrator | 2025-11-01 14:13:02.572337 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-11-01 14:13:02.572342 | orchestrator | Saturday 01 November 2025 14:06:22 +0000 (0:00:22.141) 0:05:19.262 ***** 2025-11-01 14:13:02.572346 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.572351 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.572356 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.572361 | orchestrator | 2025-11-01 14:13:02.572365 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-11-01 14:13:02.572370 | orchestrator | Saturday 01 November 2025 14:06:31 +0000 (0:00:09.405) 0:05:28.668 ***** 2025-11-01 14:13:02.572378 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.572383 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.572388 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.572392 | orchestrator | 2025-11-01 14:13:02.572397 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-11-01 14:13:02.572416 | orchestrator | Saturday 01 November 2025 14:06:32 +0000 (0:00:00.576) 0:05:29.244 ***** 2025-11-01 14:13:02.572423 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54c93fdd0e1956cf9a86e072794c4dd5a5e17aa2'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-11-01 14:13:02.572430 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54c93fdd0e1956cf9a86e072794c4dd5a5e17aa2'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-11-01 14:13:02.572440 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54c93fdd0e1956cf9a86e072794c4dd5a5e17aa2'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-11-01 14:13:02.572446 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': 
'192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54c93fdd0e1956cf9a86e072794c4dd5a5e17aa2'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-11-01 14:13:02.572451 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54c93fdd0e1956cf9a86e072794c4dd5a5e17aa2'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-11-01 14:13:02.572457 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54c93fdd0e1956cf9a86e072794c4dd5a5e17aa2'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__54c93fdd0e1956cf9a86e072794c4dd5a5e17aa2'}])  2025-11-01 14:13:02.572463 | orchestrator | 2025-11-01 14:13:02.572467 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-01 14:13:02.572472 | orchestrator | Saturday 01 November 2025 14:06:48 +0000 (0:00:15.720) 0:05:44.965 ***** 2025-11-01 14:13:02.572477 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.572482 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.572486 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.572491 | orchestrator | 2025-11-01 14:13:02.572496 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-11-01 14:13:02.572512 | orchestrator | Saturday 01 November 2025 14:06:48 +0000 (0:00:00.360) 0:05:45.325 ***** 2025-11-01 14:13:02.572517 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.572521 | orchestrator | 2025-11-01 14:13:02.572526 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-11-01 14:13:02.572531 | orchestrator | Saturday 01 November 2025 14:06:49 +0000 (0:00:00.904) 0:05:46.230 ***** 2025-11-01 14:13:02.572536 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.572541 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.572545 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.572550 | orchestrator | 2025-11-01 14:13:02.572555 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-11-01 14:13:02.572560 | orchestrator | Saturday 01 November 2025 14:06:49 +0000 (0:00:00.354) 0:05:46.584 ***** 2025-11-01 14:13:02.572565 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.572569 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.572574 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.572579 | orchestrator | 2025-11-01 14:13:02.572584 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-11-01 14:13:02.572588 | orchestrator | Saturday 01 November 2025 14:06:50 +0000 (0:00:00.363) 0:05:46.948 ***** 2025-11-01 14:13:02.572593 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-01 14:13:02.572598 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-1)  2025-11-01 14:13:02.572606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-01 14:13:02.572611 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.572615 | orchestrator | 2025-11-01 14:13:02.572623 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-11-01 14:13:02.572628 | orchestrator | Saturday 01 November 2025 14:06:51 +0000 (0:00:00.948) 0:05:47.897 ***** 2025-11-01 14:13:02.572633 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.572653 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.572658 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.572663 | orchestrator | 2025-11-01 14:13:02.572668 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-11-01 14:13:02.572672 | orchestrator | 2025-11-01 14:13:02.572677 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-01 14:13:02.572682 | orchestrator | Saturday 01 November 2025 14:06:51 +0000 (0:00:00.852) 0:05:48.750 ***** 2025-11-01 14:13:02.572687 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.572692 | orchestrator | 2025-11-01 14:13:02.572696 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-01 14:13:02.572701 | orchestrator | Saturday 01 November 2025 14:06:52 +0000 (0:00:00.509) 0:05:49.259 ***** 2025-11-01 14:13:02.572706 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.572711 | orchestrator | 2025-11-01 14:13:02.572715 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-01 14:13:02.572720 | orchestrator | Saturday 01 November 2025 14:06:53 +0000 (0:00:00.872) 0:05:50.132 ***** 2025-11-01 14:13:02.572725 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.572729 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.572734 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.572739 | orchestrator | 2025-11-01 14:13:02.572744 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-01 14:13:02.572748 | orchestrator | Saturday 01 November 2025 14:06:54 +0000 (0:00:00.737) 0:05:50.870 ***** 2025-11-01 14:13:02.572753 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.572758 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.572762 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.572767 | orchestrator | 2025-11-01 14:13:02.572772 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-01 14:13:02.572777 | orchestrator | Saturday 01 November 2025 14:06:54 +0000 (0:00:00.327) 0:05:51.197 ***** 2025-11-01 14:13:02.572781 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.572786 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.572791 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.572795 | orchestrator | 2025-11-01 14:13:02.572800 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-01 14:13:02.572805 | orchestrator | Saturday 01 November 2025 14:06:55 +0000 (0:00:00.619) 0:05:51.816 ***** 2025-11-01 14:13:02.572810 | orchestrator | 
skipping: [testbed-node-0] 2025-11-01 14:13:02.572814 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.572819 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.572824 | orchestrator | 2025-11-01 14:13:02.572828 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-01 14:13:02.572833 | orchestrator | Saturday 01 November 2025 14:06:55 +0000 (0:00:00.321) 0:05:52.138 ***** 2025-11-01 14:13:02.572838 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.572843 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.572847 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.572852 | orchestrator | 2025-11-01 14:13:02.572857 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-01 14:13:02.572862 | orchestrator | Saturday 01 November 2025 14:06:56 +0000 (0:00:00.773) 0:05:52.911 ***** 2025-11-01 14:13:02.572866 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.572875 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.572880 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.572884 | orchestrator | 2025-11-01 14:13:02.572889 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-01 14:13:02.572894 | orchestrator | Saturday 01 November 2025 14:06:56 +0000 (0:00:00.311) 0:05:53.222 ***** 2025-11-01 14:13:02.572898 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.572903 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.572908 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.572913 | orchestrator | 2025-11-01 14:13:02.572917 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-01 14:13:02.572922 | orchestrator | Saturday 01 November 2025 14:06:57 +0000 (0:00:00.607) 0:05:53.830 ***** 2025-11-01 14:13:02.572927 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.572931 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.572936 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.572941 | orchestrator | 2025-11-01 14:13:02.572945 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-01 14:13:02.572950 | orchestrator | Saturday 01 November 2025 14:06:57 +0000 (0:00:00.747) 0:05:54.577 ***** 2025-11-01 14:13:02.572955 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.572960 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.572964 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.572969 | orchestrator | 2025-11-01 14:13:02.572974 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-01 14:13:02.572978 | orchestrator | Saturday 01 November 2025 14:06:58 +0000 (0:00:00.747) 0:05:55.325 ***** 2025-11-01 14:13:02.572983 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.572988 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.572993 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.572997 | orchestrator | 2025-11-01 14:13:02.573002 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-01 14:13:02.573007 | orchestrator | Saturday 01 November 2025 14:06:58 +0000 (0:00:00.303) 0:05:55.629 ***** 2025-11-01 14:13:02.573012 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.573016 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.573021 | 
orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.573026 | orchestrator | 2025-11-01 14:13:02.573031 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-01 14:13:02.573035 | orchestrator | Saturday 01 November 2025 14:06:59 +0000 (0:00:00.345) 0:05:55.975 ***** 2025-11-01 14:13:02.573040 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.573047 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.573052 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.573057 | orchestrator | 2025-11-01 14:13:02.573062 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-01 14:13:02.573080 | orchestrator | Saturday 01 November 2025 14:06:59 +0000 (0:00:00.610) 0:05:56.586 ***** 2025-11-01 14:13:02.573086 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.573091 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.573096 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.573100 | orchestrator | 2025-11-01 14:13:02.573105 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-01 14:13:02.573110 | orchestrator | Saturday 01 November 2025 14:07:00 +0000 (0:00:00.364) 0:05:56.951 ***** 2025-11-01 14:13:02.573115 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.573119 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.573124 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.573129 | orchestrator | 2025-11-01 14:13:02.573134 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-01 14:13:02.573138 | orchestrator | Saturday 01 November 2025 14:07:00 +0000 (0:00:00.356) 0:05:57.307 ***** 2025-11-01 14:13:02.573143 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.573148 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.573156 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.573161 | orchestrator | 2025-11-01 14:13:02.573165 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-01 14:13:02.573170 | orchestrator | Saturday 01 November 2025 14:07:00 +0000 (0:00:00.361) 0:05:57.669 ***** 2025-11-01 14:13:02.573175 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.573180 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.573184 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.573189 | orchestrator | 2025-11-01 14:13:02.573194 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-01 14:13:02.573198 | orchestrator | Saturday 01 November 2025 14:07:01 +0000 (0:00:00.612) 0:05:58.282 ***** 2025-11-01 14:13:02.573203 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.573208 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.573213 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.573217 | orchestrator | 2025-11-01 14:13:02.573222 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-01 14:13:02.573227 | orchestrator | Saturday 01 November 2025 14:07:01 +0000 (0:00:00.338) 0:05:58.621 ***** 2025-11-01 14:13:02.573232 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.573236 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.573241 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.573246 | orchestrator | 2025-11-01 
14:13:02.573251 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-01 14:13:02.573255 | orchestrator | Saturday 01 November 2025 14:07:02 +0000 (0:00:00.342) 0:05:58.963 ***** 2025-11-01 14:13:02.573260 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.573265 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.573270 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.573274 | orchestrator | 2025-11-01 14:13:02.573279 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-11-01 14:13:02.573284 | orchestrator | Saturday 01 November 2025 14:07:02 +0000 (0:00:00.798) 0:05:59.762 ***** 2025-11-01 14:13:02.573289 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-01 14:13:02.573294 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 14:13:02.573298 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 14:13:02.573303 | orchestrator | 2025-11-01 14:13:02.573308 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-11-01 14:13:02.573312 | orchestrator | Saturday 01 November 2025 14:07:03 +0000 (0:00:00.683) 0:06:00.446 ***** 2025-11-01 14:13:02.573317 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.573322 | orchestrator | 2025-11-01 14:13:02.573327 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-11-01 14:13:02.573331 | orchestrator | Saturday 01 November 2025 14:07:04 +0000 (0:00:00.559) 0:06:01.005 ***** 2025-11-01 14:13:02.573336 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.573341 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.573346 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.573350 | orchestrator | 2025-11-01 14:13:02.573355 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-11-01 14:13:02.573360 | orchestrator | Saturday 01 November 2025 14:07:04 +0000 (0:00:00.722) 0:06:01.728 ***** 2025-11-01 14:13:02.573364 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.573369 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.573374 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.573378 | orchestrator | 2025-11-01 14:13:02.573383 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-11-01 14:13:02.573388 | orchestrator | Saturday 01 November 2025 14:07:05 +0000 (0:00:00.656) 0:06:02.384 ***** 2025-11-01 14:13:02.573393 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-01 14:13:02.573397 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-01 14:13:02.573405 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-01 14:13:02.573410 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-11-01 14:13:02.573415 | orchestrator | 2025-11-01 14:13:02.573420 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-11-01 14:13:02.573424 | orchestrator | Saturday 01 November 2025 14:07:17 +0000 (0:00:11.503) 0:06:13.888 ***** 2025-11-01 14:13:02.573429 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.573434 | orchestrator | ok: [testbed-node-1] 2025-11-01 
14:13:02.573438 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.573443 | orchestrator | 2025-11-01 14:13:02.573448 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-11-01 14:13:02.573453 | orchestrator | Saturday 01 November 2025 14:07:17 +0000 (0:00:00.360) 0:06:14.249 ***** 2025-11-01 14:13:02.573457 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-11-01 14:13:02.573465 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-11-01 14:13:02.573470 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-11-01 14:13:02.573474 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-11-01 14:13:02.573479 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 14:13:02.573498 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 14:13:02.573547 | orchestrator | 2025-11-01 14:13:02.573552 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-11-01 14:13:02.573557 | orchestrator | Saturday 01 November 2025 14:07:19 +0000 (0:00:02.467) 0:06:16.716 ***** 2025-11-01 14:13:02.573562 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-11-01 14:13:02.573567 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-11-01 14:13:02.573571 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-11-01 14:13:02.573576 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-11-01 14:13:02.573581 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-01 14:13:02.573585 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-11-01 14:13:02.573590 | orchestrator | 2025-11-01 14:13:02.573595 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-11-01 14:13:02.573600 | orchestrator | Saturday 01 November 2025 14:07:21 +0000 (0:00:01.393) 0:06:18.110 ***** 2025-11-01 14:13:02.573604 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.573609 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.573614 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.573619 | orchestrator | 2025-11-01 14:13:02.573623 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-11-01 14:13:02.573628 | orchestrator | Saturday 01 November 2025 14:07:22 +0000 (0:00:01.034) 0:06:19.145 ***** 2025-11-01 14:13:02.573633 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.573638 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.573642 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.573647 | orchestrator | 2025-11-01 14:13:02.573652 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-11-01 14:13:02.573656 | orchestrator | Saturday 01 November 2025 14:07:22 +0000 (0:00:00.304) 0:06:19.449 ***** 2025-11-01 14:13:02.573661 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.573666 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.573671 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.573675 | orchestrator | 2025-11-01 14:13:02.573680 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-11-01 14:13:02.573685 | orchestrator | Saturday 01 November 2025 14:07:22 +0000 (0:00:00.348) 0:06:19.797 ***** 2025-11-01 14:13:02.573690 | orchestrator | included: 
/ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.573694 | orchestrator | 2025-11-01 14:13:02.573699 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-11-01 14:13:02.573704 | orchestrator | Saturday 01 November 2025 14:07:23 +0000 (0:00:00.826) 0:06:20.624 ***** 2025-11-01 14:13:02.573714 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.573719 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.573724 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.573729 | orchestrator | 2025-11-01 14:13:02.573733 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-11-01 14:13:02.573738 | orchestrator | Saturday 01 November 2025 14:07:24 +0000 (0:00:00.339) 0:06:20.963 ***** 2025-11-01 14:13:02.573743 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.573748 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.573752 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.573757 | orchestrator | 2025-11-01 14:13:02.573762 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-11-01 14:13:02.573766 | orchestrator | Saturday 01 November 2025 14:07:24 +0000 (0:00:00.345) 0:06:21.309 ***** 2025-11-01 14:13:02.573771 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.573776 | orchestrator | 2025-11-01 14:13:02.573781 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-11-01 14:13:02.573785 | orchestrator | Saturday 01 November 2025 14:07:25 +0000 (0:00:00.787) 0:06:22.097 ***** 2025-11-01 14:13:02.573790 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.573795 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.573800 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.573804 | orchestrator | 2025-11-01 14:13:02.573809 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-11-01 14:13:02.573814 | orchestrator | Saturday 01 November 2025 14:07:26 +0000 (0:00:01.312) 0:06:23.409 ***** 2025-11-01 14:13:02.573818 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.573823 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.573828 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.573833 | orchestrator | 2025-11-01 14:13:02.573837 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-11-01 14:13:02.573842 | orchestrator | Saturday 01 November 2025 14:07:27 +0000 (0:00:01.197) 0:06:24.606 ***** 2025-11-01 14:13:02.573847 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.573851 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.573856 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.573861 | orchestrator | 2025-11-01 14:13:02.573866 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-11-01 14:13:02.573870 | orchestrator | Saturday 01 November 2025 14:07:29 +0000 (0:00:01.803) 0:06:26.410 ***** 2025-11-01 14:13:02.573875 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.573880 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.573885 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.573890 | orchestrator | 
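For readers tracing the ceph-mgr bring-up above (keyring created on the first monitor and copied out, systemd unit and ceph-mgr.target generated, target enabled, daemons started), the sequence is roughly equivalent to the hypothetical Ansible tasks below. This is a simplified sketch, not the content of /ansible/roles/ceph-mgr/tasks/; the unit name and the key capabilities are assumptions based on common Ceph defaults.

    # Simplified sketch of the mgr bring-up logged above (assumed unit name and caps).
    - name: Create the mgr keyring on the first monitor
      ansible.builtin.command: >
        ceph auth get-or-create mgr.{{ inventory_hostname }}
        mon 'allow profile mgr' osd 'allow *' mds 'allow *'
      delegate_to: "{{ groups[mon_group_name][0] }}"
      changed_when: true

    - name: Start and enable the mgr daemon (assumed unit name)
      ansible.builtin.systemd:
        name: "ceph-mgr@{{ ansible_facts['hostname'] }}"
        state: started
        enabled: true
        daemon_reload: true   # pick up the freshly templated unit file

    - name: Enable the ceph-mgr.target umbrella target
      ansible.builtin.systemd:
        name: ceph-mgr.target
        state: started
        enabled: true

The five "FAILED - RETRYING: Wait for all mgr to be up" lines that follow are normal on a first deployment: the freshly started daemons need roughly half a minute to register with the monitors before the check succeeds.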
2025-11-01 14:13:02.573894 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-11-01 14:13:02.573899 | orchestrator | Saturday 01 November 2025 14:07:31 +0000 (0:00:02.339) 0:06:28.749 ***** 2025-11-01 14:13:02.573904 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.573908 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.573913 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-11-01 14:13:02.573918 | orchestrator | 2025-11-01 14:13:02.573927 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-11-01 14:13:02.573932 | orchestrator | Saturday 01 November 2025 14:07:32 +0000 (0:00:00.425) 0:06:29.175 ***** 2025-11-01 14:13:02.573952 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-11-01 14:13:02.573957 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-11-01 14:13:02.573962 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-11-01 14:13:02.573967 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-11-01 14:13:02.573976 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-11-01 14:13:02.573981 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-11-01 14:13:02.573986 | orchestrator | 2025-11-01 14:13:02.573990 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-11-01 14:13:02.573995 | orchestrator | Saturday 01 November 2025 14:08:02 +0000 (0:00:30.526) 0:06:59.701 ***** 2025-11-01 14:13:02.574000 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-11-01 14:13:02.574005 | orchestrator | 2025-11-01 14:13:02.574009 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-11-01 14:13:02.574041 | orchestrator | Saturday 01 November 2025 14:08:04 +0000 (0:00:01.365) 0:07:01.067 ***** 2025-11-01 14:13:02.574047 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.574052 | orchestrator | 2025-11-01 14:13:02.574057 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-11-01 14:13:02.574061 | orchestrator | Saturday 01 November 2025 14:08:04 +0000 (0:00:00.339) 0:07:01.406 ***** 2025-11-01 14:13:02.574066 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.574071 | orchestrator | 2025-11-01 14:13:02.574075 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-11-01 14:13:02.574080 | orchestrator | Saturday 01 November 2025 14:08:04 +0000 (0:00:00.151) 0:07:01.558 ***** 2025-11-01 14:13:02.574085 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-11-01 14:13:02.574090 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-11-01 14:13:02.574095 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-11-01 14:13:02.574099 | orchestrator | 2025-11-01 14:13:02.574104 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-11-01 14:13:02.574109 | 
orchestrator | Saturday 01 November 2025 14:08:11 +0000 (0:00:06.939) 0:07:08.497 ***** 2025-11-01 14:13:02.574113 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-11-01 14:13:02.574118 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-11-01 14:13:02.574123 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-11-01 14:13:02.574128 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-11-01 14:13:02.574132 | orchestrator | 2025-11-01 14:13:02.574137 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-01 14:13:02.574142 | orchestrator | Saturday 01 November 2025 14:08:16 +0000 (0:00:04.717) 0:07:13.215 ***** 2025-11-01 14:13:02.574147 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.574152 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.574156 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.574161 | orchestrator | 2025-11-01 14:13:02.574166 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-11-01 14:13:02.574171 | orchestrator | Saturday 01 November 2025 14:08:17 +0000 (0:00:00.777) 0:07:13.993 ***** 2025-11-01 14:13:02.574175 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.574180 | orchestrator | 2025-11-01 14:13:02.574185 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-11-01 14:13:02.574190 | orchestrator | Saturday 01 November 2025 14:08:17 +0000 (0:00:00.626) 0:07:14.620 ***** 2025-11-01 14:13:02.574194 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.574199 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.574204 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.574209 | orchestrator | 2025-11-01 14:13:02.574213 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-11-01 14:13:02.574218 | orchestrator | Saturday 01 November 2025 14:08:18 +0000 (0:00:00.317) 0:07:14.938 ***** 2025-11-01 14:13:02.574227 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.574231 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.574236 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.574241 | orchestrator | 2025-11-01 14:13:02.574246 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-11-01 14:13:02.574251 | orchestrator | Saturday 01 November 2025 14:08:19 +0000 (0:00:01.190) 0:07:16.128 ***** 2025-11-01 14:13:02.574255 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-01 14:13:02.574260 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-01 14:13:02.574265 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-01 14:13:02.574270 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.574274 | orchestrator | 2025-11-01 14:13:02.574279 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-11-01 14:13:02.574284 | orchestrator | Saturday 01 November 2025 14:08:19 +0000 (0:00:00.547) 0:07:16.676 ***** 2025-11-01 14:13:02.574289 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.574293 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.574298 | orchestrator | ok: 
[testbed-node-2] 2025-11-01 14:13:02.574303 | orchestrator | 2025-11-01 14:13:02.574311 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-11-01 14:13:02.574316 | orchestrator | 2025-11-01 14:13:02.574321 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-01 14:13:02.574325 | orchestrator | Saturday 01 November 2025 14:08:20 +0000 (0:00:00.666) 0:07:17.342 ***** 2025-11-01 14:13:02.574346 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.574352 | orchestrator | 2025-11-01 14:13:02.574357 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-01 14:13:02.574361 | orchestrator | Saturday 01 November 2025 14:08:20 +0000 (0:00:00.454) 0:07:17.797 ***** 2025-11-01 14:13:02.574366 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.574371 | orchestrator | 2025-11-01 14:13:02.574376 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-01 14:13:02.574381 | orchestrator | Saturday 01 November 2025 14:08:21 +0000 (0:00:00.586) 0:07:18.383 ***** 2025-11-01 14:13:02.574385 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.574390 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.574395 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.574399 | orchestrator | 2025-11-01 14:13:02.574404 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-01 14:13:02.574409 | orchestrator | Saturday 01 November 2025 14:08:21 +0000 (0:00:00.284) 0:07:18.667 ***** 2025-11-01 14:13:02.574413 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.574418 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.574423 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.574428 | orchestrator | 2025-11-01 14:13:02.574432 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-01 14:13:02.574437 | orchestrator | Saturday 01 November 2025 14:08:22 +0000 (0:00:00.611) 0:07:19.279 ***** 2025-11-01 14:13:02.574442 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.574447 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.574451 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.574456 | orchestrator | 2025-11-01 14:13:02.574461 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-01 14:13:02.574465 | orchestrator | Saturday 01 November 2025 14:08:23 +0000 (0:00:00.645) 0:07:19.925 ***** 2025-11-01 14:13:02.574470 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.574475 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.574479 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.574484 | orchestrator | 2025-11-01 14:13:02.574489 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-01 14:13:02.574494 | orchestrator | Saturday 01 November 2025 14:08:23 +0000 (0:00:00.819) 0:07:20.744 ***** 2025-11-01 14:13:02.574513 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.574518 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.574523 | orchestrator | skipping: [testbed-node-5] 2025-11-01 
14:13:02.574528 | orchestrator | 2025-11-01 14:13:02.574533 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-01 14:13:02.574537 | orchestrator | Saturday 01 November 2025 14:08:24 +0000 (0:00:00.300) 0:07:21.045 ***** 2025-11-01 14:13:02.574542 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.574547 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.574552 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.574556 | orchestrator | 2025-11-01 14:13:02.574561 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-01 14:13:02.574566 | orchestrator | Saturday 01 November 2025 14:08:24 +0000 (0:00:00.300) 0:07:21.346 ***** 2025-11-01 14:13:02.574571 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.574575 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.574580 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.574585 | orchestrator | 2025-11-01 14:13:02.574589 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-01 14:13:02.574594 | orchestrator | Saturday 01 November 2025 14:08:24 +0000 (0:00:00.279) 0:07:21.625 ***** 2025-11-01 14:13:02.574599 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.574604 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.574608 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.574613 | orchestrator | 2025-11-01 14:13:02.574618 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-01 14:13:02.574623 | orchestrator | Saturday 01 November 2025 14:08:25 +0000 (0:00:00.826) 0:07:22.451 ***** 2025-11-01 14:13:02.574627 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.574632 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.574637 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.574641 | orchestrator | 2025-11-01 14:13:02.574646 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-01 14:13:02.574651 | orchestrator | Saturday 01 November 2025 14:08:26 +0000 (0:00:00.618) 0:07:23.069 ***** 2025-11-01 14:13:02.574656 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.574660 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.574665 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.574670 | orchestrator | 2025-11-01 14:13:02.574675 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-01 14:13:02.574679 | orchestrator | Saturday 01 November 2025 14:08:26 +0000 (0:00:00.281) 0:07:23.351 ***** 2025-11-01 14:13:02.574684 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.574689 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.574693 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.574698 | orchestrator | 2025-11-01 14:13:02.574703 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-01 14:13:02.574708 | orchestrator | Saturday 01 November 2025 14:08:26 +0000 (0:00:00.258) 0:07:23.609 ***** 2025-11-01 14:13:02.574712 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.574717 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.574722 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.574726 | orchestrator | 2025-11-01 14:13:02.574731 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mds_status] ****************************** 2025-11-01 14:13:02.574736 | orchestrator | Saturday 01 November 2025 14:08:27 +0000 (0:00:00.548) 0:07:24.157 ***** 2025-11-01 14:13:02.574741 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.574745 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.574750 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.574755 | orchestrator | 2025-11-01 14:13:02.574763 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-01 14:13:02.574768 | orchestrator | Saturday 01 November 2025 14:08:27 +0000 (0:00:00.407) 0:07:24.564 ***** 2025-11-01 14:13:02.574772 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.574794 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.574799 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.574804 | orchestrator | 2025-11-01 14:13:02.574809 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-01 14:13:02.574814 | orchestrator | Saturday 01 November 2025 14:08:28 +0000 (0:00:00.372) 0:07:24.937 ***** 2025-11-01 14:13:02.574818 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.574823 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.574828 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.574833 | orchestrator | 2025-11-01 14:13:02.574837 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-01 14:13:02.574842 | orchestrator | Saturday 01 November 2025 14:08:28 +0000 (0:00:00.294) 0:07:25.231 ***** 2025-11-01 14:13:02.574847 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.574852 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.574856 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.574861 | orchestrator | 2025-11-01 14:13:02.574866 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-01 14:13:02.574870 | orchestrator | Saturday 01 November 2025 14:08:29 +0000 (0:00:00.609) 0:07:25.840 ***** 2025-11-01 14:13:02.574875 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.574880 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.574884 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.574889 | orchestrator | 2025-11-01 14:13:02.574894 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-01 14:13:02.574899 | orchestrator | Saturday 01 November 2025 14:08:29 +0000 (0:00:00.312) 0:07:26.153 ***** 2025-11-01 14:13:02.574903 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.574908 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.574913 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.574917 | orchestrator | 2025-11-01 14:13:02.574922 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-01 14:13:02.574927 | orchestrator | Saturday 01 November 2025 14:08:29 +0000 (0:00:00.372) 0:07:26.526 ***** 2025-11-01 14:13:02.574932 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.574936 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.574941 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.574946 | orchestrator | 2025-11-01 14:13:02.574951 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-11-01 14:13:02.574955 | orchestrator | Saturday 01 November 2025 14:08:30 
+0000 (0:00:00.798) 0:07:27.324 ***** 2025-11-01 14:13:02.574960 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.574965 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.574969 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.574974 | orchestrator | 2025-11-01 14:13:02.574979 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-11-01 14:13:02.574983 | orchestrator | Saturday 01 November 2025 14:08:30 +0000 (0:00:00.334) 0:07:27.659 ***** 2025-11-01 14:13:02.574988 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-01 14:13:02.574993 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 14:13:02.574998 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 14:13:02.575002 | orchestrator | 2025-11-01 14:13:02.575007 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-11-01 14:13:02.575012 | orchestrator | Saturday 01 November 2025 14:08:31 +0000 (0:00:00.653) 0:07:28.313 ***** 2025-11-01 14:13:02.575017 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.575021 | orchestrator | 2025-11-01 14:13:02.575026 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-11-01 14:13:02.575031 | orchestrator | Saturday 01 November 2025 14:08:32 +0000 (0:00:00.576) 0:07:28.889 ***** 2025-11-01 14:13:02.575036 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.575047 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.575051 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.575056 | orchestrator | 2025-11-01 14:13:02.575061 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-11-01 14:13:02.575066 | orchestrator | Saturday 01 November 2025 14:08:32 +0000 (0:00:00.579) 0:07:29.468 ***** 2025-11-01 14:13:02.575070 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.575075 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.575080 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.575084 | orchestrator | 2025-11-01 14:13:02.575089 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-11-01 14:13:02.575094 | orchestrator | Saturday 01 November 2025 14:08:32 +0000 (0:00:00.306) 0:07:29.774 ***** 2025-11-01 14:13:02.575098 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.575103 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.575108 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.575113 | orchestrator | 2025-11-01 14:13:02.575117 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-11-01 14:13:02.575122 | orchestrator | Saturday 01 November 2025 14:08:33 +0000 (0:00:00.753) 0:07:30.528 ***** 2025-11-01 14:13:02.575127 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.575131 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.575136 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.575141 | orchestrator | 2025-11-01 14:13:02.575146 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-11-01 14:13:02.575150 | orchestrator | Saturday 01 November 2025 14:08:34 +0000 (0:00:00.345) 
0:07:30.873 ***** 2025-11-01 14:13:02.575155 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-11-01 14:13:02.575160 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-11-01 14:13:02.575168 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-11-01 14:13:02.575172 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-11-01 14:13:02.575179 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-11-01 14:13:02.575184 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-11-01 14:13:02.575189 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-11-01 14:13:02.575194 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-11-01 14:13:02.575198 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-11-01 14:13:02.575203 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-11-01 14:13:02.575208 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-11-01 14:13:02.575213 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-11-01 14:13:02.575217 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-11-01 14:13:02.575222 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-11-01 14:13:02.575227 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-11-01 14:13:02.575231 | orchestrator | 2025-11-01 14:13:02.575236 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-11-01 14:13:02.575241 | orchestrator | Saturday 01 November 2025 14:08:37 +0000 (0:00:03.438) 0:07:34.312 ***** 2025-11-01 14:13:02.575246 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.575250 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.575255 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.575260 | orchestrator | 2025-11-01 14:13:02.575265 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-11-01 14:13:02.575273 | orchestrator | Saturday 01 November 2025 14:08:37 +0000 (0:00:00.367) 0:07:34.680 ***** 2025-11-01 14:13:02.575278 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.575282 | orchestrator | 2025-11-01 14:13:02.575287 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-11-01 14:13:02.575292 | orchestrator | Saturday 01 November 2025 14:08:38 +0000 (0:00:00.579) 0:07:35.260 ***** 2025-11-01 14:13:02.575296 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-11-01 14:13:02.575301 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-11-01 14:13:02.575306 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-11-01 14:13:02.575311 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 
2025-11-01 14:13:02.575315 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-11-01 14:13:02.575320 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-11-01 14:13:02.575325 | orchestrator | 2025-11-01 14:13:02.575329 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-11-01 14:13:02.575334 | orchestrator | Saturday 01 November 2025 14:08:39 +0000 (0:00:01.283) 0:07:36.544 ***** 2025-11-01 14:13:02.575339 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 14:13:02.575344 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-01 14:13:02.575348 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-01 14:13:02.575353 | orchestrator | 2025-11-01 14:13:02.575358 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-11-01 14:13:02.575363 | orchestrator | Saturday 01 November 2025 14:08:42 +0000 (0:00:02.316) 0:07:38.860 ***** 2025-11-01 14:13:02.575367 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-01 14:13:02.575372 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-01 14:13:02.575377 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.575381 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-01 14:13:02.575386 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-11-01 14:13:02.575391 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.575396 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-01 14:13:02.575400 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-11-01 14:13:02.575405 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.575410 | orchestrator | 2025-11-01 14:13:02.575415 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-11-01 14:13:02.575419 | orchestrator | Saturday 01 November 2025 14:08:43 +0000 (0:00:01.274) 0:07:40.135 ***** 2025-11-01 14:13:02.575424 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-01 14:13:02.575429 | orchestrator | 2025-11-01 14:13:02.575433 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-11-01 14:13:02.575438 | orchestrator | Saturday 01 November 2025 14:08:45 +0000 (0:00:02.131) 0:07:42.267 ***** 2025-11-01 14:13:02.575443 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.575448 | orchestrator | 2025-11-01 14:13:02.575452 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-11-01 14:13:02.575457 | orchestrator | Saturday 01 November 2025 14:08:45 +0000 (0:00:00.518) 0:07:42.785 ***** 2025-11-01 14:13:02.575462 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f', 'data_vg': 'ceph-8ee830d1-3d8f-5ecc-a4b4-c1bec6b9910f'}) 2025-11-01 14:13:02.575470 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-47edfe94-e799-500a-9f78-eae255c41273', 'data_vg': 'ceph-47edfe94-e799-500a-9f78-eae255c41273'}) 2025-11-01 14:13:02.575478 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bf0a4791-ac15-5066-8808-a0a6deeb0cc9', 'data_vg': 'ceph-bf0a4791-ac15-5066-8808-a0a6deeb0cc9'}) 2025-11-01 14:13:02.575486 | orchestrator | changed: 
[testbed-node-5] => (item={'data': 'osd-block-7e540012-4fa7-591e-a498-149cbb5b09d9', 'data_vg': 'ceph-7e540012-4fa7-591e-a498-149cbb5b09d9'}) 2025-11-01 14:13:02.575491 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-efff7302-70e8-5bbc-90af-2166d1a25777', 'data_vg': 'ceph-efff7302-70e8-5bbc-90af-2166d1a25777'}) 2025-11-01 14:13:02.575496 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5630d3b4-f241-5aa8-9956-015e1822542e', 'data_vg': 'ceph-5630d3b4-f241-5aa8-9956-015e1822542e'}) 2025-11-01 14:13:02.575529 | orchestrator | 2025-11-01 14:13:02.575534 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-11-01 14:13:02.575539 | orchestrator | Saturday 01 November 2025 14:09:29 +0000 (0:00:43.562) 0:08:26.348 ***** 2025-11-01 14:13:02.575544 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.575549 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.575553 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.575558 | orchestrator | 2025-11-01 14:13:02.575563 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-11-01 14:13:02.575568 | orchestrator | Saturday 01 November 2025 14:09:29 +0000 (0:00:00.412) 0:08:26.760 ***** 2025-11-01 14:13:02.575573 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.575578 | orchestrator | 2025-11-01 14:13:02.575582 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-11-01 14:13:02.575587 | orchestrator | Saturday 01 November 2025 14:09:30 +0000 (0:00:00.560) 0:08:27.321 ***** 2025-11-01 14:13:02.575592 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.575597 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.575601 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.575606 | orchestrator | 2025-11-01 14:13:02.575611 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-11-01 14:13:02.575616 | orchestrator | Saturday 01 November 2025 14:09:31 +0000 (0:00:01.000) 0:08:28.321 ***** 2025-11-01 14:13:02.575620 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.575625 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.575630 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.575635 | orchestrator | 2025-11-01 14:13:02.575640 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-11-01 14:13:02.575644 | orchestrator | Saturday 01 November 2025 14:09:34 +0000 (0:00:02.660) 0:08:30.982 ***** 2025-11-01 14:13:02.575649 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.575653 | orchestrator | 2025-11-01 14:13:02.575658 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-11-01 14:13:02.575662 | orchestrator | Saturday 01 November 2025 14:09:34 +0000 (0:00:00.547) 0:08:31.529 ***** 2025-11-01 14:13:02.575667 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.575671 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.575676 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.575680 | orchestrator | 2025-11-01 14:13:02.575685 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-11-01 14:13:02.575689 
| orchestrator | Saturday 01 November 2025 14:09:36 +0000 (0:00:01.463) 0:08:32.993 ***** 2025-11-01 14:13:02.575694 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.575698 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.575703 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.575707 | orchestrator | 2025-11-01 14:13:02.575712 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-11-01 14:13:02.575716 | orchestrator | Saturday 01 November 2025 14:09:37 +0000 (0:00:01.143) 0:08:34.137 ***** 2025-11-01 14:13:02.575721 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.575725 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.575730 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.575738 | orchestrator | 2025-11-01 14:13:02.575742 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-11-01 14:13:02.575747 | orchestrator | Saturday 01 November 2025 14:09:39 +0000 (0:00:01.870) 0:08:36.007 ***** 2025-11-01 14:13:02.575751 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.575756 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.575760 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.575765 | orchestrator | 2025-11-01 14:13:02.575769 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-11-01 14:13:02.575774 | orchestrator | Saturday 01 November 2025 14:09:39 +0000 (0:00:00.327) 0:08:36.335 ***** 2025-11-01 14:13:02.575778 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.575783 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.575787 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.575791 | orchestrator | 2025-11-01 14:13:02.575796 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-11-01 14:13:02.575801 | orchestrator | Saturday 01 November 2025 14:09:40 +0000 (0:00:00.735) 0:08:37.070 ***** 2025-11-01 14:13:02.575805 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-11-01 14:13:02.575810 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-11-01 14:13:02.575814 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-11-01 14:13:02.575818 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-01 14:13:02.575823 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-11-01 14:13:02.575827 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-11-01 14:13:02.575832 | orchestrator | 2025-11-01 14:13:02.575836 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-11-01 14:13:02.575841 | orchestrator | Saturday 01 November 2025 14:09:41 +0000 (0:00:01.158) 0:08:38.229 ***** 2025-11-01 14:13:02.575848 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-11-01 14:13:02.575853 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-11-01 14:13:02.575857 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-11-01 14:13:02.575862 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-11-01 14:13:02.575869 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-11-01 14:13:02.575874 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-11-01 14:13:02.575878 | orchestrator | 2025-11-01 14:13:02.575883 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-11-01 14:13:02.575887 | orchestrator | Saturday 01 November 2025 14:09:43 
+0000 (0:00:02.264) 0:08:40.494 ***** 2025-11-01 14:13:02.575892 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-11-01 14:13:02.575896 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-11-01 14:13:02.575901 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-11-01 14:13:02.575905 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-11-01 14:13:02.575910 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-11-01 14:13:02.575914 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-11-01 14:13:02.575919 | orchestrator | 2025-11-01 14:13:02.575923 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-11-01 14:13:02.575928 | orchestrator | Saturday 01 November 2025 14:09:47 +0000 (0:00:03.735) 0:08:44.229 ***** 2025-11-01 14:13:02.575932 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.575937 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.575941 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-01 14:13:02.575946 | orchestrator | 2025-11-01 14:13:02.575950 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-11-01 14:13:02.575955 | orchestrator | Saturday 01 November 2025 14:09:50 +0000 (0:00:02.864) 0:08:47.093 ***** 2025-11-01 14:13:02.575959 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.575964 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.575968 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-11-01 14:13:02.575976 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-01 14:13:02.575981 | orchestrator | 2025-11-01 14:13:02.575985 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-11-01 14:13:02.575990 | orchestrator | Saturday 01 November 2025 14:10:02 +0000 (0:00:12.652) 0:08:59.746 ***** 2025-11-01 14:13:02.575994 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.575999 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.576003 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.576008 | orchestrator | 2025-11-01 14:13:02.576012 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-01 14:13:02.576017 | orchestrator | Saturday 01 November 2025 14:10:04 +0000 (0:00:01.154) 0:09:00.901 ***** 2025-11-01 14:13:02.576021 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576026 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.576030 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.576035 | orchestrator | 2025-11-01 14:13:02.576039 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-11-01 14:13:02.576044 | orchestrator | Saturday 01 November 2025 14:10:04 +0000 (0:00:00.353) 0:09:01.254 ***** 2025-11-01 14:13:02.576048 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.576053 | orchestrator | 2025-11-01 14:13:02.576057 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-11-01 14:13:02.576062 | orchestrator | Saturday 01 November 2025 14:10:04 +0000 (0:00:00.542) 0:09:01.797 ***** 2025-11-01 14:13:02.576066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  
2025-11-01 14:13:02.576071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:13:02.576075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:13:02.576080 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576084 | orchestrator | 2025-11-01 14:13:02.576089 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-11-01 14:13:02.576093 | orchestrator | Saturday 01 November 2025 14:10:06 +0000 (0:00:01.273) 0:09:03.070 ***** 2025-11-01 14:13:02.576098 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576102 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.576107 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.576111 | orchestrator | 2025-11-01 14:13:02.576116 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-11-01 14:13:02.576120 | orchestrator | Saturday 01 November 2025 14:10:06 +0000 (0:00:00.397) 0:09:03.468 ***** 2025-11-01 14:13:02.576125 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576129 | orchestrator | 2025-11-01 14:13:02.576134 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-11-01 14:13:02.576138 | orchestrator | Saturday 01 November 2025 14:10:06 +0000 (0:00:00.220) 0:09:03.689 ***** 2025-11-01 14:13:02.576143 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576147 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.576152 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.576156 | orchestrator | 2025-11-01 14:13:02.576161 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-11-01 14:13:02.576165 | orchestrator | Saturday 01 November 2025 14:10:07 +0000 (0:00:00.385) 0:09:04.074 ***** 2025-11-01 14:13:02.576170 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576174 | orchestrator | 2025-11-01 14:13:02.576179 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-11-01 14:13:02.576183 | orchestrator | Saturday 01 November 2025 14:10:07 +0000 (0:00:00.291) 0:09:04.365 ***** 2025-11-01 14:13:02.576188 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576192 | orchestrator | 2025-11-01 14:13:02.576197 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-11-01 14:13:02.576201 | orchestrator | Saturday 01 November 2025 14:10:07 +0000 (0:00:00.264) 0:09:04.630 ***** 2025-11-01 14:13:02.576210 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576215 | orchestrator | 2025-11-01 14:13:02.576222 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-11-01 14:13:02.576227 | orchestrator | Saturday 01 November 2025 14:10:07 +0000 (0:00:00.126) 0:09:04.757 ***** 2025-11-01 14:13:02.576231 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576236 | orchestrator | 2025-11-01 14:13:02.576242 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-11-01 14:13:02.576247 | orchestrator | Saturday 01 November 2025 14:10:08 +0000 (0:00:00.259) 0:09:05.016 ***** 2025-11-01 14:13:02.576252 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576256 | orchestrator | 2025-11-01 14:13:02.576261 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] 
******************* 2025-11-01 14:13:02.576265 | orchestrator | Saturday 01 November 2025 14:10:09 +0000 (0:00:00.934) 0:09:05.951 ***** 2025-11-01 14:13:02.576270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 14:13:02.576274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:13:02.576279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:13:02.576283 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576288 | orchestrator | 2025-11-01 14:13:02.576292 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-11-01 14:13:02.576297 | orchestrator | Saturday 01 November 2025 14:10:09 +0000 (0:00:00.438) 0:09:06.389 ***** 2025-11-01 14:13:02.576301 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576306 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.576310 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.576315 | orchestrator | 2025-11-01 14:13:02.576319 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-11-01 14:13:02.576324 | orchestrator | Saturday 01 November 2025 14:10:09 +0000 (0:00:00.414) 0:09:06.804 ***** 2025-11-01 14:13:02.576328 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576333 | orchestrator | 2025-11-01 14:13:02.576337 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-11-01 14:13:02.576342 | orchestrator | Saturday 01 November 2025 14:10:10 +0000 (0:00:00.258) 0:09:07.062 ***** 2025-11-01 14:13:02.576346 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576351 | orchestrator | 2025-11-01 14:13:02.576355 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-11-01 14:13:02.576360 | orchestrator | 2025-11-01 14:13:02.576364 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-01 14:13:02.576369 | orchestrator | Saturday 01 November 2025 14:10:11 +0000 (0:00:01.008) 0:09:08.071 ***** 2025-11-01 14:13:02.576373 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.576378 | orchestrator | 2025-11-01 14:13:02.576383 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-01 14:13:02.576387 | orchestrator | Saturday 01 November 2025 14:10:12 +0000 (0:00:01.263) 0:09:09.334 ***** 2025-11-01 14:13:02.576392 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.576396 | orchestrator | 2025-11-01 14:13:02.576401 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-01 14:13:02.576406 | orchestrator | Saturday 01 November 2025 14:10:13 +0000 (0:00:01.091) 0:09:10.426 ***** 2025-11-01 14:13:02.576410 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576415 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.576419 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.576424 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.576428 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.576433 | orchestrator | 
ok: [testbed-node-2] 2025-11-01 14:13:02.576437 | orchestrator | 2025-11-01 14:13:02.576445 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-01 14:13:02.576449 | orchestrator | Saturday 01 November 2025 14:10:14 +0000 (0:00:01.326) 0:09:11.752 ***** 2025-11-01 14:13:02.576454 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.576458 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.576463 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.576467 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.576472 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.576476 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.576481 | orchestrator | 2025-11-01 14:13:02.576485 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-01 14:13:02.576490 | orchestrator | Saturday 01 November 2025 14:10:15 +0000 (0:00:00.824) 0:09:12.576 ***** 2025-11-01 14:13:02.576494 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.576508 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.576513 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.576517 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.576522 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.576526 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.576531 | orchestrator | 2025-11-01 14:13:02.576535 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-01 14:13:02.576540 | orchestrator | Saturday 01 November 2025 14:10:16 +0000 (0:00:01.082) 0:09:13.659 ***** 2025-11-01 14:13:02.576544 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.576549 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.576553 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.576558 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.576562 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.576567 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.576571 | orchestrator | 2025-11-01 14:13:02.576576 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-01 14:13:02.576580 | orchestrator | Saturday 01 November 2025 14:10:17 +0000 (0:00:00.759) 0:09:14.419 ***** 2025-11-01 14:13:02.576585 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576590 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.576594 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.576598 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.576603 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.576608 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.576612 | orchestrator | 2025-11-01 14:13:02.576620 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-01 14:13:02.576624 | orchestrator | Saturday 01 November 2025 14:10:18 +0000 (0:00:01.342) 0:09:15.761 ***** 2025-11-01 14:13:02.576629 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576633 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.576641 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.576645 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.576650 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.576654 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.576659 | orchestrator | 
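Looking back at the ceph-osd play above: the expensive step is "Use ceph-volume to create osds" (about 43 seconds here), and it is bracketed by setting the cluster-wide noup flag before OSD creation and unsetting it once the daemons are started. A hypothetical, simplified equivalent is sketched below; the real role drives ceph-volume through its own module inside the Ceph container, and the lvm_volumes variable name is an assumption matching the data/data_vg pairs visible in the log.

    # Illustration only: simplified equivalent of the OSD creation sequence above.
    - name: Keep new OSDs from being marked up while they are created
      ansible.builtin.command: ceph osd set noup
      delegate_to: "{{ groups[mon_group_name][0] }}"
      run_once: true

    - name: Create one bluestore OSD per pre-created logical volume
      ansible.builtin.command: >
        ceph-volume lvm create --bluestore
        --data {{ item.data_vg }}/{{ item.data }}
      loop: "{{ lvm_volumes }}"   # data/data_vg pairs as shown in the log output

    - name: Allow the new OSDs to come up and join the cluster
      ansible.builtin.command: ceph osd unset noup
      delegate_to: "{{ groups[mon_group_name][0] }}"
      run_once: true

The single retry in "Wait for all osd to be up" afterwards follows the same pattern as the mgr wait: the OSDs need a short peering window before the monitors report all of them as up.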
2025-11-01 14:13:02.576663 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-01 14:13:02.576668 | orchestrator | Saturday 01 November 2025 14:10:19 +0000 (0:00:00.660) 0:09:16.422 ***** 2025-11-01 14:13:02.576672 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576677 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.576681 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.576686 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.576690 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.576695 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.576699 | orchestrator | 2025-11-01 14:13:02.576704 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-01 14:13:02.576708 | orchestrator | Saturday 01 November 2025 14:10:20 +0000 (0:00:00.960) 0:09:17.382 ***** 2025-11-01 14:13:02.576713 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.576721 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.576725 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.576730 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.576734 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.576739 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.576743 | orchestrator | 2025-11-01 14:13:02.576748 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-01 14:13:02.576752 | orchestrator | Saturday 01 November 2025 14:10:21 +0000 (0:00:01.078) 0:09:18.461 ***** 2025-11-01 14:13:02.576757 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.576761 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.576766 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.576770 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.576775 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.576779 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.576784 | orchestrator | 2025-11-01 14:13:02.576788 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-01 14:13:02.576793 | orchestrator | Saturday 01 November 2025 14:10:23 +0000 (0:00:01.571) 0:09:20.033 ***** 2025-11-01 14:13:02.576797 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576802 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.576806 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.576811 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.576815 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.576820 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.576824 | orchestrator | 2025-11-01 14:13:02.576829 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-01 14:13:02.576833 | orchestrator | Saturday 01 November 2025 14:10:23 +0000 (0:00:00.615) 0:09:20.648 ***** 2025-11-01 14:13:02.576838 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.576842 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.576847 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.576851 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.576856 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.576860 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.576865 | orchestrator | 2025-11-01 14:13:02.576869 | orchestrator | TASK [ceph-handler : Set_fact 
handler_osd_status] ****************************** 2025-11-01 14:13:02.576874 | orchestrator | Saturday 01 November 2025 14:10:24 +0000 (0:00:01.004) 0:09:21.653 ***** 2025-11-01 14:13:02.576878 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.576883 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.576887 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.576892 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.576896 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.576901 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.576905 | orchestrator | 2025-11-01 14:13:02.576910 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-01 14:13:02.576914 | orchestrator | Saturday 01 November 2025 14:10:25 +0000 (0:00:00.664) 0:09:22.318 ***** 2025-11-01 14:13:02.576919 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.576923 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.576928 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.576932 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.576937 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.576941 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.576946 | orchestrator | 2025-11-01 14:13:02.576950 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-01 14:13:02.576955 | orchestrator | Saturday 01 November 2025 14:10:26 +0000 (0:00:00.971) 0:09:23.290 ***** 2025-11-01 14:13:02.576959 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.576964 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.576968 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.576973 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.576977 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.576982 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.576990 | orchestrator | 2025-11-01 14:13:02.576994 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-01 14:13:02.576999 | orchestrator | Saturday 01 November 2025 14:10:27 +0000 (0:00:00.691) 0:09:23.982 ***** 2025-11-01 14:13:02.577003 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.577008 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.577012 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.577017 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.577021 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.577026 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.577030 | orchestrator | 2025-11-01 14:13:02.577035 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-01 14:13:02.577039 | orchestrator | Saturday 01 November 2025 14:10:28 +0000 (0:00:00.963) 0:09:24.945 ***** 2025-11-01 14:13:02.577044 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.577048 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.577053 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.577057 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:02.577062 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:02.577069 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:02.577074 | orchestrator | 2025-11-01 14:13:02.577078 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] 
****************************** 2025-11-01 14:13:02.577083 | orchestrator | Saturday 01 November 2025 14:10:28 +0000 (0:00:00.643) 0:09:25.588 ***** 2025-11-01 14:13:02.577089 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.577094 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.577099 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.577103 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.577108 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.577112 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.577117 | orchestrator | 2025-11-01 14:13:02.577121 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-01 14:13:02.577126 | orchestrator | Saturday 01 November 2025 14:10:29 +0000 (0:00:00.941) 0:09:26.529 ***** 2025-11-01 14:13:02.577130 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.577135 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.577139 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.577144 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.577148 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.577153 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.577157 | orchestrator | 2025-11-01 14:13:02.577162 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-01 14:13:02.577166 | orchestrator | Saturday 01 November 2025 14:10:30 +0000 (0:00:00.710) 0:09:27.240 ***** 2025-11-01 14:13:02.577171 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.577175 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.577180 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.577184 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.577189 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.577193 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.577197 | orchestrator | 2025-11-01 14:13:02.577202 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-11-01 14:13:02.577207 | orchestrator | Saturday 01 November 2025 14:10:31 +0000 (0:00:01.382) 0:09:28.623 ***** 2025-11-01 14:13:02.577211 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-01 14:13:02.577216 | orchestrator | 2025-11-01 14:13:02.577220 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-11-01 14:13:02.577225 | orchestrator | Saturday 01 November 2025 14:10:35 +0000 (0:00:04.156) 0:09:32.779 ***** 2025-11-01 14:13:02.577229 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-01 14:13:02.577234 | orchestrator | 2025-11-01 14:13:02.577238 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-11-01 14:13:02.577243 | orchestrator | Saturday 01 November 2025 14:10:38 +0000 (0:00:02.161) 0:09:34.941 ***** 2025-11-01 14:13:02.577250 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.577255 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.577259 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.577264 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.577268 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.577273 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.577277 | orchestrator | 2025-11-01 14:13:02.577282 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 
2025-11-01 14:13:02.577287 | orchestrator | Saturday 01 November 2025 14:10:39 +0000 (0:00:01.842) 0:09:36.783 ***** 2025-11-01 14:13:02.577291 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.577296 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.577300 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.577305 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.577309 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.577314 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.577318 | orchestrator | 2025-11-01 14:13:02.577323 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-11-01 14:13:02.577327 | orchestrator | Saturday 01 November 2025 14:10:41 +0000 (0:00:01.060) 0:09:37.843 ***** 2025-11-01 14:13:02.577332 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.577336 | orchestrator | 2025-11-01 14:13:02.577341 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-11-01 14:13:02.577345 | orchestrator | Saturday 01 November 2025 14:10:42 +0000 (0:00:01.306) 0:09:39.149 ***** 2025-11-01 14:13:02.577350 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.577354 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.577359 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.577363 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.577368 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.577372 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.577377 | orchestrator | 2025-11-01 14:13:02.577381 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-11-01 14:13:02.577386 | orchestrator | Saturday 01 November 2025 14:10:44 +0000 (0:00:01.913) 0:09:41.062 ***** 2025-11-01 14:13:02.577390 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.577395 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.577399 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.577404 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.577408 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.577413 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.577417 | orchestrator | 2025-11-01 14:13:02.577422 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-11-01 14:13:02.577426 | orchestrator | Saturday 01 November 2025 14:10:47 +0000 (0:00:03.473) 0:09:44.536 ***** 2025-11-01 14:13:02.577431 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:02.577436 | orchestrator | 2025-11-01 14:13:02.577440 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-11-01 14:13:02.577445 | orchestrator | Saturday 01 November 2025 14:10:49 +0000 (0:00:01.478) 0:09:46.014 ***** 2025-11-01 14:13:02.577449 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.577454 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.577458 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.577463 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.577467 | orchestrator | ok: [testbed-node-1] 2025-11-01 
14:13:02.577474 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.577479 | orchestrator | 2025-11-01 14:13:02.577483 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-11-01 14:13:02.577488 | orchestrator | Saturday 01 November 2025 14:10:50 +0000 (0:00:00.941) 0:09:46.956 ***** 2025-11-01 14:13:02.577509 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.577514 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.577518 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.577523 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:02.577527 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:02.577532 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:02.577536 | orchestrator | 2025-11-01 14:13:02.577541 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-11-01 14:13:02.577545 | orchestrator | Saturday 01 November 2025 14:10:52 +0000 (0:00:02.484) 0:09:49.440 ***** 2025-11-01 14:13:02.577550 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.577555 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.577559 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.577563 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:02.577568 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:02.577572 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:02.577577 | orchestrator | 2025-11-01 14:13:02.577581 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-11-01 14:13:02.577586 | orchestrator | 2025-11-01 14:13:02.577590 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-01 14:13:02.577595 | orchestrator | Saturday 01 November 2025 14:10:54 +0000 (0:00:01.430) 0:09:50.870 ***** 2025-11-01 14:13:02.577600 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.577604 | orchestrator | 2025-11-01 14:13:02.577609 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-01 14:13:02.577613 | orchestrator | Saturday 01 November 2025 14:10:54 +0000 (0:00:00.551) 0:09:51.422 ***** 2025-11-01 14:13:02.577618 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.577622 | orchestrator | 2025-11-01 14:13:02.577627 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-01 14:13:02.577631 | orchestrator | Saturday 01 November 2025 14:10:55 +0000 (0:00:00.951) 0:09:52.373 ***** 2025-11-01 14:13:02.577636 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.577640 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.577645 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.577649 | orchestrator | 2025-11-01 14:13:02.577654 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-01 14:13:02.577659 | orchestrator | Saturday 01 November 2025 14:10:55 +0000 (0:00:00.362) 0:09:52.735 ***** 2025-11-01 14:13:02.577663 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.577668 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.577672 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.577677 | orchestrator | 2025-11-01 
14:13:02.577681 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-01 14:13:02.577686 | orchestrator | Saturday 01 November 2025 14:10:56 +0000 (0:00:00.751) 0:09:53.486 ***** 2025-11-01 14:13:02.577690 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.577695 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.577699 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.577704 | orchestrator | 2025-11-01 14:13:02.577708 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-01 14:13:02.577713 | orchestrator | Saturday 01 November 2025 14:10:57 +0000 (0:00:01.115) 0:09:54.602 ***** 2025-11-01 14:13:02.577717 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.577722 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.577726 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.577731 | orchestrator | 2025-11-01 14:13:02.577735 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-01 14:13:02.577740 | orchestrator | Saturday 01 November 2025 14:10:58 +0000 (0:00:00.824) 0:09:55.427 ***** 2025-11-01 14:13:02.577744 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.577753 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.577757 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.577762 | orchestrator | 2025-11-01 14:13:02.577766 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-01 14:13:02.577771 | orchestrator | Saturday 01 November 2025 14:10:58 +0000 (0:00:00.318) 0:09:55.746 ***** 2025-11-01 14:13:02.577776 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.577780 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.577785 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.577789 | orchestrator | 2025-11-01 14:13:02.577794 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-01 14:13:02.577798 | orchestrator | Saturday 01 November 2025 14:10:59 +0000 (0:00:00.316) 0:09:56.063 ***** 2025-11-01 14:13:02.577803 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.577807 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.577812 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.577816 | orchestrator | 2025-11-01 14:13:02.577821 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-01 14:13:02.577825 | orchestrator | Saturday 01 November 2025 14:11:00 +0000 (0:00:00.886) 0:09:56.949 ***** 2025-11-01 14:13:02.577830 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.577834 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.577839 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.577843 | orchestrator | 2025-11-01 14:13:02.577848 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-01 14:13:02.577852 | orchestrator | Saturday 01 November 2025 14:11:00 +0000 (0:00:00.760) 0:09:57.710 ***** 2025-11-01 14:13:02.577857 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.577861 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.577866 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.577870 | orchestrator | 2025-11-01 14:13:02.577875 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-01 
14:13:02.577879 | orchestrator | Saturday 01 November 2025 14:11:01 +0000 (0:00:00.954) 0:09:58.665 ***** 2025-11-01 14:13:02.577884 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.577888 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.577896 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.577900 | orchestrator | 2025-11-01 14:13:02.577905 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-01 14:13:02.577909 | orchestrator | Saturday 01 November 2025 14:11:02 +0000 (0:00:00.315) 0:09:58.980 ***** 2025-11-01 14:13:02.577916 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.577920 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.577925 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.577929 | orchestrator | 2025-11-01 14:13:02.577934 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-01 14:13:02.577938 | orchestrator | Saturday 01 November 2025 14:11:02 +0000 (0:00:00.606) 0:09:59.586 ***** 2025-11-01 14:13:02.577943 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.577947 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.577952 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.577956 | orchestrator | 2025-11-01 14:13:02.577961 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-01 14:13:02.577965 | orchestrator | Saturday 01 November 2025 14:11:03 +0000 (0:00:00.377) 0:09:59.964 ***** 2025-11-01 14:13:02.577970 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.577975 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.577979 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.577984 | orchestrator | 2025-11-01 14:13:02.577988 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-01 14:13:02.577993 | orchestrator | Saturday 01 November 2025 14:11:03 +0000 (0:00:00.357) 0:10:00.322 ***** 2025-11-01 14:13:02.577997 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.578002 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.578006 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.578035 | orchestrator | 2025-11-01 14:13:02.578041 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-01 14:13:02.578046 | orchestrator | Saturday 01 November 2025 14:11:03 +0000 (0:00:00.470) 0:10:00.792 ***** 2025-11-01 14:13:02.578050 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.578055 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.578059 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.578064 | orchestrator | 2025-11-01 14:13:02.578068 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-01 14:13:02.578073 | orchestrator | Saturday 01 November 2025 14:11:04 +0000 (0:00:00.802) 0:10:01.594 ***** 2025-11-01 14:13:02.578077 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.578082 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.578086 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.578091 | orchestrator | 2025-11-01 14:13:02.578096 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-01 14:13:02.578100 | orchestrator | Saturday 01 November 2025 14:11:05 +0000 (0:00:00.380) 0:10:01.974 ***** 
2025-11-01 14:13:02.578105 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.578109 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.578114 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.578118 | orchestrator | 2025-11-01 14:13:02.578123 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-01 14:13:02.578127 | orchestrator | Saturday 01 November 2025 14:11:05 +0000 (0:00:00.421) 0:10:02.396 ***** 2025-11-01 14:13:02.578132 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.578136 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.578141 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.578146 | orchestrator | 2025-11-01 14:13:02.578150 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-01 14:13:02.578155 | orchestrator | Saturday 01 November 2025 14:11:05 +0000 (0:00:00.331) 0:10:02.728 ***** 2025-11-01 14:13:02.578159 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.578164 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.578168 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.578173 | orchestrator | 2025-11-01 14:13:02.578177 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-11-01 14:13:02.578182 | orchestrator | Saturday 01 November 2025 14:11:06 +0000 (0:00:00.944) 0:10:03.673 ***** 2025-11-01 14:13:02.578186 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.578191 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.578196 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-11-01 14:13:02.578200 | orchestrator | 2025-11-01 14:13:02.578205 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-11-01 14:13:02.578209 | orchestrator | Saturday 01 November 2025 14:11:07 +0000 (0:00:00.543) 0:10:04.216 ***** 2025-11-01 14:13:02.578214 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-01 14:13:02.578218 | orchestrator | 2025-11-01 14:13:02.578223 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-11-01 14:13:02.578227 | orchestrator | Saturday 01 November 2025 14:11:09 +0000 (0:00:02.332) 0:10:06.549 ***** 2025-11-01 14:13:02.578233 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-11-01 14:13:02.578238 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.578243 | orchestrator | 2025-11-01 14:13:02.578248 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-11-01 14:13:02.578252 | orchestrator | Saturday 01 November 2025 14:11:09 +0000 (0:00:00.205) 0:10:06.755 ***** 2025-11-01 14:13:02.578257 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-01 14:13:02.578272 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 
'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-01 14:13:02.578277 | orchestrator | 2025-11-01 14:13:02.578282 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-11-01 14:13:02.578289 | orchestrator | Saturday 01 November 2025 14:11:18 +0000 (0:00:08.414) 0:10:15.169 ***** 2025-11-01 14:13:02.578294 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-01 14:13:02.578299 | orchestrator | 2025-11-01 14:13:02.578303 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-11-01 14:13:02.578308 | orchestrator | Saturday 01 November 2025 14:11:21 +0000 (0:00:03.421) 0:10:18.591 ***** 2025-11-01 14:13:02.578312 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.578317 | orchestrator | 2025-11-01 14:13:02.578321 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-11-01 14:13:02.578326 | orchestrator | Saturday 01 November 2025 14:11:22 +0000 (0:00:00.538) 0:10:19.129 ***** 2025-11-01 14:13:02.578331 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-11-01 14:13:02.578335 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-11-01 14:13:02.578340 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-11-01 14:13:02.578344 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-11-01 14:13:02.578348 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-11-01 14:13:02.578353 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-11-01 14:13:02.578357 | orchestrator | 2025-11-01 14:13:02.578362 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-11-01 14:13:02.578366 | orchestrator | Saturday 01 November 2025 14:11:23 +0000 (0:00:00.923) 0:10:20.053 ***** 2025-11-01 14:13:02.578371 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 14:13:02.578375 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-01 14:13:02.578380 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-01 14:13:02.578384 | orchestrator | 2025-11-01 14:13:02.578389 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-11-01 14:13:02.578394 | orchestrator | Saturday 01 November 2025 14:11:25 +0000 (0:00:02.680) 0:10:22.733 ***** 2025-11-01 14:13:02.578398 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-01 14:13:02.578403 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-01 14:13:02.578407 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.578412 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-01 14:13:02.578416 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-11-01 14:13:02.578421 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.578425 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-01 14:13:02.578430 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-11-01 14:13:02.578434 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.578439 | orchestrator | 2025-11-01 
14:13:02.578443 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-11-01 14:13:02.578448 | orchestrator | Saturday 01 November 2025 14:11:27 +0000 (0:00:01.564) 0:10:24.298 ***** 2025-11-01 14:13:02.578452 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.578457 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.578461 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.578466 | orchestrator | 2025-11-01 14:13:02.578470 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-11-01 14:13:02.578480 | orchestrator | Saturday 01 November 2025 14:11:30 +0000 (0:00:02.811) 0:10:27.109 ***** 2025-11-01 14:13:02.578484 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.578489 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.578493 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.578498 | orchestrator | 2025-11-01 14:13:02.578526 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-11-01 14:13:02.578531 | orchestrator | Saturday 01 November 2025 14:11:30 +0000 (0:00:00.366) 0:10:27.475 ***** 2025-11-01 14:13:02.578536 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.578540 | orchestrator | 2025-11-01 14:13:02.578545 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-11-01 14:13:02.578549 | orchestrator | Saturday 01 November 2025 14:11:31 +0000 (0:00:00.829) 0:10:28.305 ***** 2025-11-01 14:13:02.578554 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.578558 | orchestrator | 2025-11-01 14:13:02.578563 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-11-01 14:13:02.578567 | orchestrator | Saturday 01 November 2025 14:11:32 +0000 (0:00:00.561) 0:10:28.867 ***** 2025-11-01 14:13:02.578572 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.578576 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.578581 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.578585 | orchestrator | 2025-11-01 14:13:02.578590 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-11-01 14:13:02.578594 | orchestrator | Saturday 01 November 2025 14:11:33 +0000 (0:00:01.268) 0:10:30.136 ***** 2025-11-01 14:13:02.578598 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.578603 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.578607 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.578612 | orchestrator | 2025-11-01 14:13:02.578616 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-11-01 14:13:02.578621 | orchestrator | Saturday 01 November 2025 14:11:34 +0000 (0:00:01.534) 0:10:31.671 ***** 2025-11-01 14:13:02.578625 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.578630 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.578634 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.578642 | orchestrator | 2025-11-01 14:13:02.578646 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-11-01 14:13:02.578651 | orchestrator | Saturday 01 November 2025 14:11:36 +0000 
(0:00:01.937) 0:10:33.609 ***** 2025-11-01 14:13:02.578655 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.578662 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.578667 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.578672 | orchestrator | 2025-11-01 14:13:02.578676 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-11-01 14:13:02.578681 | orchestrator | Saturday 01 November 2025 14:11:39 +0000 (0:00:02.925) 0:10:36.534 ***** 2025-11-01 14:13:02.578685 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.578690 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.578694 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.578699 | orchestrator | 2025-11-01 14:13:02.578703 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-01 14:13:02.578708 | orchestrator | Saturday 01 November 2025 14:11:41 +0000 (0:00:01.535) 0:10:38.070 ***** 2025-11-01 14:13:02.578712 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.578717 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.578721 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.578726 | orchestrator | 2025-11-01 14:13:02.578730 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-11-01 14:13:02.578735 | orchestrator | Saturday 01 November 2025 14:11:41 +0000 (0:00:00.677) 0:10:38.747 ***** 2025-11-01 14:13:02.578739 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.578748 | orchestrator | 2025-11-01 14:13:02.578752 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-11-01 14:13:02.578757 | orchestrator | Saturday 01 November 2025 14:11:42 +0000 (0:00:00.806) 0:10:39.553 ***** 2025-11-01 14:13:02.578761 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.578766 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.578770 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.578775 | orchestrator | 2025-11-01 14:13:02.578779 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-11-01 14:13:02.578784 | orchestrator | Saturday 01 November 2025 14:11:43 +0000 (0:00:00.387) 0:10:39.941 ***** 2025-11-01 14:13:02.578788 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.578793 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.578797 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.578802 | orchestrator | 2025-11-01 14:13:02.578806 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-11-01 14:13:02.578811 | orchestrator | Saturday 01 November 2025 14:11:44 +0000 (0:00:01.264) 0:10:41.205 ***** 2025-11-01 14:13:02.578815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 14:13:02.578820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:13:02.578824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:13:02.578829 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.578834 | orchestrator | 2025-11-01 14:13:02.578838 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-11-01 14:13:02.578843 | orchestrator | Saturday 01 November 2025 14:11:45 +0000 (0:00:01.001) 
0:10:42.207 ***** 2025-11-01 14:13:02.578847 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.578852 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.578856 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.578861 | orchestrator | 2025-11-01 14:13:02.578865 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-11-01 14:13:02.578870 | orchestrator | 2025-11-01 14:13:02.578874 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-01 14:13:02.578879 | orchestrator | Saturday 01 November 2025 14:11:46 +0000 (0:00:00.919) 0:10:43.127 ***** 2025-11-01 14:13:02.578883 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.578888 | orchestrator | 2025-11-01 14:13:02.578893 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-01 14:13:02.578897 | orchestrator | Saturday 01 November 2025 14:11:46 +0000 (0:00:00.501) 0:10:43.629 ***** 2025-11-01 14:13:02.578902 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.578906 | orchestrator | 2025-11-01 14:13:02.578911 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-01 14:13:02.578915 | orchestrator | Saturday 01 November 2025 14:11:47 +0000 (0:00:00.753) 0:10:44.382 ***** 2025-11-01 14:13:02.578920 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.578924 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.578929 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.578933 | orchestrator | 2025-11-01 14:13:02.578938 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-01 14:13:02.578942 | orchestrator | Saturday 01 November 2025 14:11:47 +0000 (0:00:00.349) 0:10:44.732 ***** 2025-11-01 14:13:02.578947 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.578951 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.578956 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.578960 | orchestrator | 2025-11-01 14:13:02.578965 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-01 14:13:02.578969 | orchestrator | Saturday 01 November 2025 14:11:48 +0000 (0:00:00.740) 0:10:45.473 ***** 2025-11-01 14:13:02.578977 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.578982 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.578986 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.578991 | orchestrator | 2025-11-01 14:13:02.578995 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-01 14:13:02.579000 | orchestrator | Saturday 01 November 2025 14:11:49 +0000 (0:00:00.998) 0:10:46.472 ***** 2025-11-01 14:13:02.579004 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.579009 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.579013 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.579018 | orchestrator | 2025-11-01 14:13:02.579022 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-01 14:13:02.579030 | orchestrator | Saturday 01 November 2025 14:11:50 +0000 (0:00:00.756) 0:10:47.229 ***** 2025-11-01 14:13:02.579035 | 
orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.579039 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.579043 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.579047 | orchestrator | 2025-11-01 14:13:02.579053 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-01 14:13:02.579057 | orchestrator | Saturday 01 November 2025 14:11:50 +0000 (0:00:00.351) 0:10:47.580 ***** 2025-11-01 14:13:02.579061 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.579066 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.579070 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.579074 | orchestrator | 2025-11-01 14:13:02.579078 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-01 14:13:02.579082 | orchestrator | Saturday 01 November 2025 14:11:51 +0000 (0:00:00.353) 0:10:47.934 ***** 2025-11-01 14:13:02.579086 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.579090 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.579094 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.579098 | orchestrator | 2025-11-01 14:13:02.579102 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-01 14:13:02.579106 | orchestrator | Saturday 01 November 2025 14:11:51 +0000 (0:00:00.628) 0:10:48.562 ***** 2025-11-01 14:13:02.579110 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.579114 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.579119 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.579123 | orchestrator | 2025-11-01 14:13:02.579127 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-01 14:13:02.579131 | orchestrator | Saturday 01 November 2025 14:11:52 +0000 (0:00:00.754) 0:10:49.317 ***** 2025-11-01 14:13:02.579135 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.579139 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.579143 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.579147 | orchestrator | 2025-11-01 14:13:02.579151 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-01 14:13:02.579155 | orchestrator | Saturday 01 November 2025 14:11:53 +0000 (0:00:00.755) 0:10:50.073 ***** 2025-11-01 14:13:02.579160 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.579164 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.579168 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.579172 | orchestrator | 2025-11-01 14:13:02.579176 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-01 14:13:02.579180 | orchestrator | Saturday 01 November 2025 14:11:53 +0000 (0:00:00.367) 0:10:50.440 ***** 2025-11-01 14:13:02.579184 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.579188 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.579192 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.579196 | orchestrator | 2025-11-01 14:13:02.579200 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-01 14:13:02.579205 | orchestrator | Saturday 01 November 2025 14:11:54 +0000 (0:00:00.607) 0:10:51.047 ***** 2025-11-01 14:13:02.579209 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.579216 | orchestrator | ok: 
[testbed-node-4] 2025-11-01 14:13:02.579220 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.579224 | orchestrator | 2025-11-01 14:13:02.579228 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-01 14:13:02.579233 | orchestrator | Saturday 01 November 2025 14:11:54 +0000 (0:00:00.359) 0:10:51.407 ***** 2025-11-01 14:13:02.579237 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.579241 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.579245 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.579249 | orchestrator | 2025-11-01 14:13:02.579253 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-01 14:13:02.579257 | orchestrator | Saturday 01 November 2025 14:11:54 +0000 (0:00:00.351) 0:10:51.759 ***** 2025-11-01 14:13:02.579261 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.579265 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.579269 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.579273 | orchestrator | 2025-11-01 14:13:02.579277 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-01 14:13:02.579282 | orchestrator | Saturday 01 November 2025 14:11:55 +0000 (0:00:00.380) 0:10:52.140 ***** 2025-11-01 14:13:02.579286 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.579290 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.579294 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.579298 | orchestrator | 2025-11-01 14:13:02.579302 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-01 14:13:02.579306 | orchestrator | Saturday 01 November 2025 14:11:55 +0000 (0:00:00.341) 0:10:52.482 ***** 2025-11-01 14:13:02.579310 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.579315 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.579319 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.579323 | orchestrator | 2025-11-01 14:13:02.579327 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-01 14:13:02.579331 | orchestrator | Saturday 01 November 2025 14:11:56 +0000 (0:00:00.622) 0:10:53.104 ***** 2025-11-01 14:13:02.579335 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.579339 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.579343 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.579347 | orchestrator | 2025-11-01 14:13:02.579351 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-01 14:13:02.579355 | orchestrator | Saturday 01 November 2025 14:11:56 +0000 (0:00:00.357) 0:10:53.462 ***** 2025-11-01 14:13:02.579359 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.579364 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.579368 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.579372 | orchestrator | 2025-11-01 14:13:02.579376 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-01 14:13:02.579380 | orchestrator | Saturday 01 November 2025 14:11:56 +0000 (0:00:00.335) 0:10:53.798 ***** 2025-11-01 14:13:02.579384 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.579388 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.579392 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.579396 | orchestrator | 
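The "Check for a … container" and "Set_fact handler_*_status" tasks above only record whether each Ceph daemon container is already present on a node, so the handlers later in the play know which services they may restart. A rough manual spot-check of the same thing, assuming Docker as the container runtime and ceph-ansible's usual ceph-<daemon> name prefixes (both assumptions; this deployment may use podman and host-suffixed container names):

    # Hypothetical spot-check; the runtime and name filters are assumptions, not taken from this job.
    docker ps --filter "name=ceph-osd" --format '{{.Names}}'
    docker ps --filter "name=ceph-mds" --format '{{.Names}}'
    docker ps --filter "name=ceph-rgw" --format '{{.Names}}'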
2025-11-01 14:13:02.579400 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-11-01 14:13:02.579407 | orchestrator | Saturday 01 November 2025 14:11:57 +0000 (0:00:00.907) 0:10:54.705 ***** 2025-11-01 14:13:02.579411 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.579415 | orchestrator | 2025-11-01 14:13:02.579419 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-11-01 14:13:02.579425 | orchestrator | Saturday 01 November 2025 14:11:58 +0000 (0:00:00.587) 0:10:55.293 ***** 2025-11-01 14:13:02.579430 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 14:13:02.579434 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-01 14:13:02.579438 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-01 14:13:02.579445 | orchestrator | 2025-11-01 14:13:02.579449 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-11-01 14:13:02.579453 | orchestrator | Saturday 01 November 2025 14:12:00 +0000 (0:00:02.356) 0:10:57.650 ***** 2025-11-01 14:13:02.579457 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-01 14:13:02.579461 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-01 14:13:02.579465 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-01 14:13:02.579469 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-11-01 14:13:02.579474 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.579478 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.579482 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-01 14:13:02.579486 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-11-01 14:13:02.579490 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.579494 | orchestrator | 2025-11-01 14:13:02.579498 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-11-01 14:13:02.579512 | orchestrator | Saturday 01 November 2025 14:12:02 +0000 (0:00:01.538) 0:10:59.189 ***** 2025-11-01 14:13:02.579516 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.579521 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.579525 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.579529 | orchestrator | 2025-11-01 14:13:02.579533 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-11-01 14:13:02.579537 | orchestrator | Saturday 01 November 2025 14:12:02 +0000 (0:00:00.422) 0:10:59.611 ***** 2025-11-01 14:13:02.579541 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.579545 | orchestrator | 2025-11-01 14:13:02.579549 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-11-01 14:13:02.579553 | orchestrator | Saturday 01 November 2025 14:12:03 +0000 (0:00:00.565) 0:11:00.176 ***** 2025-11-01 14:13:02.579557 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.579561 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 
'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.579566 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.579570 | orchestrator | 2025-11-01 14:13:02.579574 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-11-01 14:13:02.579578 | orchestrator | Saturday 01 November 2025 14:12:04 +0000 (0:00:01.391) 0:11:01.568 ***** 2025-11-01 14:13:02.579582 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 14:13:02.579586 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-11-01 14:13:02.579590 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 14:13:02.579594 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-11-01 14:13:02.579599 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 14:13:02.579603 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-11-01 14:13:02.579607 | orchestrator | 2025-11-01 14:13:02.579611 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-11-01 14:13:02.579615 | orchestrator | Saturday 01 November 2025 14:12:09 +0000 (0:00:05.007) 0:11:06.576 ***** 2025-11-01 14:13:02.579619 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 14:13:02.579626 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-01 14:13:02.579630 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 14:13:02.579634 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-01 14:13:02.579638 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 14:13:02.579642 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-01 14:13:02.579646 | orchestrator | 2025-11-01 14:13:02.579651 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-11-01 14:13:02.579655 | orchestrator | Saturday 01 November 2025 14:12:12 +0000 (0:00:02.684) 0:11:09.260 ***** 2025-11-01 14:13:02.579659 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-01 14:13:02.579663 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.579667 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-01 14:13:02.579673 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-01 14:13:02.579678 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.579682 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.579686 | orchestrator | 2025-11-01 14:13:02.579690 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-11-01 14:13:02.579696 | orchestrator | Saturday 01 November 2025 14:12:13 +0000 (0:00:01.286) 0:11:10.546 ***** 2025-11-01 14:13:02.579701 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-11-01 14:13:02.579705 
| orchestrator | 2025-11-01 14:13:02.579709 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-11-01 14:13:02.579713 | orchestrator | Saturday 01 November 2025 14:12:14 +0000 (0:00:00.317) 0:11:10.864 ***** 2025-11-01 14:13:02.579717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 14:13:02.579721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 14:13:02.579725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 14:13:02.579729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 14:13:02.579733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 14:13:02.579737 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.579742 | orchestrator | 2025-11-01 14:13:02.579746 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-11-01 14:13:02.579750 | orchestrator | Saturday 01 November 2025 14:12:15 +0000 (0:00:01.339) 0:11:12.204 ***** 2025-11-01 14:13:02.579754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 14:13:02.579758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 14:13:02.579762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 14:13:02.579766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 14:13:02.579770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 14:13:02.579774 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.579778 | orchestrator | 2025-11-01 14:13:02.579782 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-11-01 14:13:02.579787 | orchestrator | Saturday 01 November 2025 14:12:16 +0000 (0:00:00.652) 0:11:12.856 ***** 2025-11-01 14:13:02.579794 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-01 14:13:02.579798 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-01 14:13:02.579802 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-01 14:13:02.579806 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-01 14:13:02.579810 | orchestrator | changed: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-01 14:13:02.579814 | orchestrator | 2025-11-01 14:13:02.579818 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-11-01 14:13:02.579823 | orchestrator | Saturday 01 November 2025 14:12:47 +0000 (0:00:31.898) 0:11:44.755 ***** 2025-11-01 14:13:02.579827 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.579831 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.579835 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.579839 | orchestrator | 2025-11-01 14:13:02.579843 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-11-01 14:13:02.579847 | orchestrator | Saturday 01 November 2025 14:12:48 +0000 (0:00:00.357) 0:11:45.112 ***** 2025-11-01 14:13:02.579851 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.579855 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.579859 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.579863 | orchestrator | 2025-11-01 14:13:02.579867 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-11-01 14:13:02.579871 | orchestrator | Saturday 01 November 2025 14:12:48 +0000 (0:00:00.325) 0:11:45.437 ***** 2025-11-01 14:13:02.579875 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.579880 | orchestrator | 2025-11-01 14:13:02.579884 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-11-01 14:13:02.579890 | orchestrator | Saturday 01 November 2025 14:12:49 +0000 (0:00:00.861) 0:11:46.299 ***** 2025-11-01 14:13:02.579894 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.579898 | orchestrator | 2025-11-01 14:13:02.579904 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-11-01 14:13:02.579909 | orchestrator | Saturday 01 November 2025 14:12:50 +0000 (0:00:00.580) 0:11:46.880 ***** 2025-11-01 14:13:02.579913 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.579917 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.579921 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.579925 | orchestrator | 2025-11-01 14:13:02.579929 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-11-01 14:13:02.579933 | orchestrator | Saturday 01 November 2025 14:12:51 +0000 (0:00:01.287) 0:11:48.167 ***** 2025-11-01 14:13:02.579937 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.579941 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.579945 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.579949 | orchestrator | 2025-11-01 14:13:02.579953 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-11-01 14:13:02.579957 | orchestrator | Saturday 01 November 2025 14:12:52 +0000 (0:00:01.549) 0:11:49.717 ***** 2025-11-01 14:13:02.579961 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:13:02.579966 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:13:02.579970 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:13:02.579977 | orchestrator | 
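The "Create rgw pools" step above creates the default.rgw.* pools with pg_num 8 and a replicated size of 3, delegated to the first monitor, and the systemd tasks that follow install and enable a ceph-radosgw.target on each rgw node. A minimal sketch of the equivalent manual pool creation, assuming a ceph CLI with admin credentials is available on a monitor (an assumption; the role drives this through its own modules inside the container):

    # Hypothetical manual equivalent for one pool from the log; repeat per default.rgw.* pool.
    ceph osd pool create default.rgw.buckets.data 8 8 replicated
    ceph osd pool set default.rgw.buckets.data size 3
    ceph osd pool application enable default.rgw.buckets.data rgw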
2025-11-01 14:13:02.579981 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-11-01 14:13:02.579985 | orchestrator | Saturday 01 November 2025 14:12:54 +0000 (0:00:01.846) 0:11:51.563 ***** 2025-11-01 14:13:02.579989 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.579993 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.579997 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-01 14:13:02.580001 | orchestrator | 2025-11-01 14:13:02.580005 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-01 14:13:02.580009 | orchestrator | Saturday 01 November 2025 14:12:57 +0000 (0:00:02.735) 0:11:54.299 ***** 2025-11-01 14:13:02.580013 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.580018 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.580022 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.580026 | orchestrator | 2025-11-01 14:13:02.580030 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-11-01 14:13:02.580034 | orchestrator | Saturday 01 November 2025 14:12:57 +0000 (0:00:00.381) 0:11:54.680 ***** 2025-11-01 14:13:02.580038 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:13:02.580042 | orchestrator | 2025-11-01 14:13:02.580046 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-11-01 14:13:02.580050 | orchestrator | Saturday 01 November 2025 14:12:58 +0000 (0:00:00.573) 0:11:55.253 ***** 2025-11-01 14:13:02.580054 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.580058 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.580062 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.580066 | orchestrator | 2025-11-01 14:13:02.580071 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-11-01 14:13:02.580075 | orchestrator | Saturday 01 November 2025 14:12:59 +0000 (0:00:00.610) 0:11:55.863 ***** 2025-11-01 14:13:02.580079 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.580083 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:13:02.580087 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:13:02.580091 | orchestrator | 2025-11-01 14:13:02.580095 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-11-01 14:13:02.580099 | orchestrator | Saturday 01 November 2025 14:12:59 +0000 (0:00:00.378) 0:11:56.242 ***** 2025-11-01 14:13:02.580103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 14:13:02.580107 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:13:02.580111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:13:02.580115 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:13:02.580119 | orchestrator | 2025-11-01 14:13:02.580123 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-11-01 14:13:02.580128 | orchestrator | Saturday 01 
November 2025 14:13:00 +0000 (0:00:00.700) 0:11:56.942 ***** 2025-11-01 14:13:02.580132 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:13:02.580136 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:13:02.580140 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:13:02.580144 | orchestrator | 2025-11-01 14:13:02.580148 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:13:02.580152 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-11-01 14:13:02.580156 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-11-01 14:13:02.580160 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-11-01 14:13:02.580168 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-11-01 14:13:02.580175 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-11-01 14:13:02.580181 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-11-01 14:13:02.580186 | orchestrator | 2025-11-01 14:13:02.580190 | orchestrator | 2025-11-01 14:13:02.580194 | orchestrator | 2025-11-01 14:13:02.580198 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:13:02.580202 | orchestrator | Saturday 01 November 2025 14:13:00 +0000 (0:00:00.324) 0:11:57.267 ***** 2025-11-01 14:13:02.580206 | orchestrator | =============================================================================== 2025-11-01 14:13:02.580210 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 46.22s 2025-11-01 14:13:02.580214 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.56s 2025-11-01 14:13:02.580218 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.90s 2025-11-01 14:13:02.580222 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.53s 2025-11-01 14:13:02.580226 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.14s 2025-11-01 14:13:02.580230 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.72s 2025-11-01 14:13:02.580235 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.65s 2025-11-01 14:13:02.580239 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.50s 2025-11-01 14:13:02.580243 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.41s 2025-11-01 14:13:02.580247 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.41s 2025-11-01 14:13:02.580251 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.62s 2025-11-01 14:13:02.580255 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.94s 2025-11-01 14:13:02.580259 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.01s 2025-11-01 14:13:02.580263 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.72s 2025-11-01 14:13:02.580267 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.59s 2025-11-01 14:13:02.580271 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.18s 2025-11-01 14:13:02.580275 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.16s 2025-11-01 14:13:02.580279 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 3.93s 2025-11-01 14:13:02.580283 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.74s 2025-11-01 14:13:02.580287 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.59s 2025-11-01 14:13:02.580291 | orchestrator | 2025-11-01 14:13:02 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:02.580296 | orchestrator | 2025-11-01 14:13:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:05.625339 | orchestrator | 2025-11-01 14:13:05 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:05.627948 | orchestrator | 2025-11-01 14:13:05 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:13:05.630124 | orchestrator | 2025-11-01 14:13:05 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:05.630404 | orchestrator | 2025-11-01 14:13:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:08.684834 | orchestrator | 2025-11-01 14:13:08 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:08.687879 | orchestrator | 2025-11-01 14:13:08 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:13:08.691951 | orchestrator | 2025-11-01 14:13:08 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:08.692064 | orchestrator | 2025-11-01 14:13:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:11.748904 | orchestrator | 2025-11-01 14:13:11 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:11.752100 | orchestrator | 2025-11-01 14:13:11 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:13:11.755972 | orchestrator | 2025-11-01 14:13:11 | INFO  | Task 
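The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" records around this point are the deployment wrapper polling its three background tasks once per second until each reports SUCCESS. The same wait-until-done behaviour can be written as an Ansible retry loop; a minimal sketch, assuming a hypothetical task-status command and a task_uuid variable:

# Sketch of the polling loop seen in the log: query the task state once per
# second and stop as soon as it reports SUCCESS. "task-status" is a
# hypothetical placeholder command, not the client actually used here.
- name: Wait for the background task to finish (sketch)
  ansible.builtin.command: "task-status {{ task_uuid }}"
  register: task_state
  changed_when: false
  retries: 3600        # give up after roughly an hour
  delay: 1             # "Wait 1 second(s) until the next check"
  until: "'SUCCESS' in task_state.stdout"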
4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:11.755997 | orchestrator | 2025-11-01 14:13:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:14.808650 | orchestrator | 2025-11-01 14:13:14 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:14.813280 | orchestrator | 2025-11-01 14:13:14 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:13:14.816124 | orchestrator | 2025-11-01 14:13:14 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:14.816385 | orchestrator | 2025-11-01 14:13:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:17.859399 | orchestrator | 2025-11-01 14:13:17 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:17.861543 | orchestrator | 2025-11-01 14:13:17 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:13:17.866317 | orchestrator | 2025-11-01 14:13:17 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:17.866342 | orchestrator | 2025-11-01 14:13:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:20.915829 | orchestrator | 2025-11-01 14:13:20 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:20.916172 | orchestrator | 2025-11-01 14:13:20 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:13:20.917671 | orchestrator | 2025-11-01 14:13:20 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:20.917692 | orchestrator | 2025-11-01 14:13:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:23.965701 | orchestrator | 2025-11-01 14:13:23 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:23.969727 | orchestrator | 2025-11-01 14:13:23 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:13:23.972216 | orchestrator | 2025-11-01 14:13:23 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:23.972889 | orchestrator | 2025-11-01 14:13:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:27.021823 | orchestrator | 2025-11-01 14:13:27 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:27.024027 | orchestrator | 2025-11-01 14:13:27 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:13:27.026685 | orchestrator | 2025-11-01 14:13:27 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:27.026775 | orchestrator | 2025-11-01 14:13:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:30.072013 | orchestrator | 2025-11-01 14:13:30 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:30.074400 | orchestrator | 2025-11-01 14:13:30 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:13:30.076641 | orchestrator | 2025-11-01 14:13:30 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:30.077133 | orchestrator | 2025-11-01 14:13:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:33.120748 | orchestrator | 2025-11-01 14:13:33 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:33.121084 | orchestrator | 2025-11-01 14:13:33 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state 
STARTED 2025-11-01 14:13:33.123718 | orchestrator | 2025-11-01 14:13:33 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:33.123744 | orchestrator | 2025-11-01 14:13:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:36.206252 | orchestrator | 2025-11-01 14:13:36 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:36.207448 | orchestrator | 2025-11-01 14:13:36 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:13:36.208805 | orchestrator | 2025-11-01 14:13:36 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:36.208874 | orchestrator | 2025-11-01 14:13:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:39.258756 | orchestrator | 2025-11-01 14:13:39 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:39.261344 | orchestrator | 2025-11-01 14:13:39 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:13:39.263254 | orchestrator | 2025-11-01 14:13:39 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:39.263458 | orchestrator | 2025-11-01 14:13:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:42.308476 | orchestrator | 2025-11-01 14:13:42 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:42.309244 | orchestrator | 2025-11-01 14:13:42 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:13:42.311072 | orchestrator | 2025-11-01 14:13:42 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:42.311111 | orchestrator | 2025-11-01 14:13:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:45.364735 | orchestrator | 2025-11-01 14:13:45 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:45.366474 | orchestrator | 2025-11-01 14:13:45 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state STARTED 2025-11-01 14:13:45.368216 | orchestrator | 2025-11-01 14:13:45 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:45.368239 | orchestrator | 2025-11-01 14:13:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:48.421856 | orchestrator | 2025-11-01 14:13:48 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:48.425878 | orchestrator | 2025-11-01 14:13:48 | INFO  | Task 80d45b28-759c-482c-a2aa-7c317f29651c is in state SUCCESS 2025-11-01 14:13:48.427905 | orchestrator | 2025-11-01 14:13:48.427936 | orchestrator | 2025-11-01 14:13:48.427945 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:13:48.427954 | orchestrator | 2025-11-01 14:13:48.427962 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:13:48.427969 | orchestrator | Saturday 01 November 2025 14:10:40 +0000 (0:00:00.272) 0:00:00.272 ***** 2025-11-01 14:13:48.428084 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:48.428095 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:13:48.428103 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:13:48.428110 | orchestrator | 2025-11-01 14:13:48.428117 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:13:48.428124 | orchestrator | Saturday 01 November 2025 14:10:41 
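The OpenSearch play that follows first raises vm.max_map_count to 262144 on every node ("Setting sysctl values"), which OpenSearch requires for its memory-mapped index files. A minimal equivalent with ansible.posix.sysctl; a sketch, not the exact task used by the role:

# Sketch: persistently raise vm.max_map_count as required by OpenSearch.
- name: Setting sysctl values (sketch)
  become: true
  ansible.posix.sysctl:
    name: vm.max_map_count
    value: "262144"
    sysctl_set: true
    state: present
    reload: true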
+0000 (0:00:00.307) 0:00:00.580 ***** 2025-11-01 14:13:48.428132 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-11-01 14:13:48.428140 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-11-01 14:13:48.428147 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-11-01 14:13:48.428154 | orchestrator | 2025-11-01 14:13:48.428161 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-11-01 14:13:48.428168 | orchestrator | 2025-11-01 14:13:48.428176 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-11-01 14:13:48.428183 | orchestrator | Saturday 01 November 2025 14:10:41 +0000 (0:00:00.468) 0:00:01.048 ***** 2025-11-01 14:13:48.428190 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:48.428198 | orchestrator | 2025-11-01 14:13:48.428206 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-11-01 14:13:48.428213 | orchestrator | Saturday 01 November 2025 14:10:42 +0000 (0:00:00.523) 0:00:01.572 ***** 2025-11-01 14:13:48.428220 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-01 14:13:48.428227 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-01 14:13:48.428235 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-01 14:13:48.428242 | orchestrator | 2025-11-01 14:13:48.428249 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-11-01 14:13:48.428256 | orchestrator | Saturday 01 November 2025 14:10:42 +0000 (0:00:00.713) 0:00:02.285 ***** 2025-11-01 14:13:48.428269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:13:48.428281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:13:48.428309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:13:48.428326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:13:48.428336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:13:48.428345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:13:48.428353 | orchestrator | 2025-11-01 14:13:48.428360 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-11-01 14:13:48.428368 | orchestrator | Saturday 01 November 2025 14:10:45 +0000 (0:00:02.133) 0:00:04.419 ***** 2025-11-01 14:13:48.428381 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:48.428388 | orchestrator | 2025-11-01 14:13:48.428395 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-11-01 14:13:48.428403 | orchestrator | Saturday 01 November 2025 14:10:45 +0000 (0:00:00.582) 0:00:05.002 ***** 2025-11-01 14:13:48.428419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:13:48.428427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:13:48.428435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:13:48.428554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:13:48.428589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:13:48.428599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:13:48.428606 | orchestrator | 2025-11-01 14:13:48.428614 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-11-01 14:13:48.428621 | orchestrator | Saturday 01 November 2025 14:10:48 +0000 (0:00:02.985) 0:00:07.987 ***** 2025-11-01 14:13:48.428629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 14:13:48.428637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 14:13:48.428650 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:48.428661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 14:13:48.428674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': 
True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 14:13:48.428682 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:48.428690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 14:13:48.428698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 14:13:48.428711 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:48.428720 | orchestrator | 2025-11-01 14:13:48.428728 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-11-01 14:13:48.428736 | orchestrator | Saturday 01 November 2025 14:10:50 +0000 (0:00:01.393) 0:00:09.381 ***** 2025-11-01 14:13:48.428748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 14:13:48.428763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 14:13:48.428772 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:48.428780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 14:13:48.428789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 14:13:48.428801 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:48.428813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 14:13:48.428828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 14:13:48.428836 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:48.428845 | orchestrator | 2025-11-01 14:13:48.428853 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-11-01 14:13:48.428861 | orchestrator | Saturday 01 November 2025 14:10:51 +0000 (0:00:01.349) 0:00:10.731 ***** 2025-11-01 14:13:48.428869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:13:48.428878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:13:48.428894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:13:48.428978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:13:48.428991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:13:48.429000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:13:48.429014 | orchestrator | 2025-11-01 14:13:48.429022 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-11-01 14:13:48.429030 | orchestrator | Saturday 01 November 2025 14:10:53 +0000 (0:00:02.553) 0:00:13.284 ***** 2025-11-01 14:13:48.429039 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:48.429047 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:48.429055 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:48.429062 | orchestrator | 2025-11-01 14:13:48.429069 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-11-01 14:13:48.429076 | orchestrator | Saturday 01 November 2025 14:10:57 +0000 (0:00:03.414) 0:00:16.699 ***** 2025-11-01 14:13:48.429083 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:48.429090 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:48.429097 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:48.429104 | orchestrator | 2025-11-01 14:13:48.429111 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-11-01 14:13:48.429119 | orchestrator | Saturday 01 November 2025 14:10:59 +0000 (0:00:02.085) 0:00:18.784 ***** 2025-11-01 14:13:48.429192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:13:48.429206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:13:48.429215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 14:13:48.429229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:13:48.429243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:13:48.429257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 14:13:48.429265 | orchestrator | 2025-11-01 14:13:48.429273 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-11-01 14:13:48.429280 | orchestrator | Saturday 01 November 2025 14:11:01 +0000 (0:00:02.238) 0:00:21.023 ***** 2025-11-01 14:13:48.429287 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:48.429295 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:13:48.429302 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:13:48.429309 | orchestrator | 2025-11-01 14:13:48.429316 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-11-01 14:13:48.429323 | orchestrator | Saturday 01 November 2025 14:11:02 +0000 (0:00:00.355) 0:00:21.378 ***** 2025-11-01 14:13:48.429330 | orchestrator | 2025-11-01 14:13:48.429337 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-11-01 14:13:48.429350 | orchestrator | Saturday 01 November 2025 14:11:02 +0000 (0:00:00.084) 0:00:21.463 ***** 2025-11-01 14:13:48.429358 | orchestrator | 2025-11-01 14:13:48.429365 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-11-01 14:13:48.429372 | orchestrator | Saturday 01 November 2025 14:11:02 +0000 (0:00:00.067) 0:00:21.530 ***** 2025-11-01 14:13:48.429379 | orchestrator | 2025-11-01 14:13:48.429386 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-11-01 14:13:48.429393 | orchestrator | Saturday 01 November 2025 14:11:02 +0000 (0:00:00.068) 0:00:21.599 ***** 2025-11-01 14:13:48.429400 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:48.429407 | orchestrator | 2025-11-01 14:13:48.429414 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-11-01 14:13:48.429421 | orchestrator | Saturday 01 November 2025 14:11:02 +0000 (0:00:00.262) 0:00:21.862 ***** 2025-11-01 14:13:48.429428 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:13:48.429435 | orchestrator | 2025-11-01 14:13:48.429443 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch 
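The "Flush handlers" tasks above force the queued restart handlers to run at this point in the play, so the opensearch and opensearch-dashboards containers are restarted as soon as their configuration has changed rather than at the end of the play. kolla-ansible drives the restart through its own container module; the following is only a generic sketch of the same notify/flush/handler pattern, using community.docker.docker_container as a stand-in and an illustrative template name:

# Sketch of the notify/flush/handler pattern; the real handler uses
# kolla-ansible's container module, docker_container is a generic stand-in.
- name: Copying over opensearch service config file (sketch)
  ansible.builtin.template:
    src: opensearch.yml.j2        # illustrative template name
    dest: /etc/kolla/opensearch/opensearch.yml
    mode: "0660"
  notify: Restart opensearch container

- name: Force queued handlers to run now
  ansible.builtin.meta: flush_handlers

# handlers:
- name: Restart opensearch container
  community.docker.docker_container:
    name: opensearch
    state: started
    restart: true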
container] ******************** 2025-11-01 14:13:48.429450 | orchestrator | Saturday 01 November 2025 14:11:03 +0000 (0:00:00.690) 0:00:22.552 ***** 2025-11-01 14:13:48.429457 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:48.429464 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:48.429471 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:48.429478 | orchestrator | 2025-11-01 14:13:48.429485 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-11-01 14:13:48.429492 | orchestrator | Saturday 01 November 2025 14:12:09 +0000 (0:01:06.434) 0:01:28.987 ***** 2025-11-01 14:13:48.429499 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:48.429526 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:13:48.429533 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:13:48.429540 | orchestrator | 2025-11-01 14:13:48.429548 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-11-01 14:13:48.429555 | orchestrator | Saturday 01 November 2025 14:13:33 +0000 (0:01:23.676) 0:02:52.663 ***** 2025-11-01 14:13:48.429562 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:13:48.429569 | orchestrator | 2025-11-01 14:13:48.429576 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-11-01 14:13:48.429583 | orchestrator | Saturday 01 November 2025 14:13:34 +0000 (0:00:00.790) 0:02:53.454 ***** 2025-11-01 14:13:48.429590 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:48.429598 | orchestrator | 2025-11-01 14:13:48.429605 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-11-01 14:13:48.429612 | orchestrator | Saturday 01 November 2025 14:13:36 +0000 (0:00:02.664) 0:02:56.119 ***** 2025-11-01 14:13:48.429619 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:13:48.429626 | orchestrator | 2025-11-01 14:13:48.429633 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-11-01 14:13:48.429640 | orchestrator | Saturday 01 November 2025 14:13:39 +0000 (0:00:02.561) 0:02:58.680 ***** 2025-11-01 14:13:48.429647 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:48.429654 | orchestrator | 2025-11-01 14:13:48.429661 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-11-01 14:13:48.429672 | orchestrator | Saturday 01 November 2025 14:13:42 +0000 (0:00:02.968) 0:03:01.649 ***** 2025-11-01 14:13:48.429679 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:13:48.429686 | orchestrator | 2025-11-01 14:13:48.429693 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:13:48.429701 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-01 14:13:48.429709 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-01 14:13:48.429716 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-01 14:13:48.429728 | orchestrator | 2025-11-01 14:13:48.429735 | orchestrator | 2025-11-01 14:13:48.429742 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:13:48.429753 | orchestrator | Saturday 01 November 
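The post-config steps above check for an existing log retention policy, create one through OpenSearch's Index State Management API, and attach it to the existing indices. A hedged sketch with ansible.builtin.uri against the standard ISM endpoints; the endpoint variable, policy name, index pattern, and retention age are illustrative values, not read from this deployment:

# Sketch: create an ISM policy that deletes old log indices and attach it to
# existing indices. All names and ages below are illustrative only.
- name: Create new log retention policy (sketch)
  ansible.builtin.uri:
    url: "{{ opensearch_internal_endpoint }}/_plugins/_ism/policies/retention"
    method: PUT
    body_format: json
    body:
      policy:
        description: "Delete old log indices"
        default_state: hot
        states:
          - name: hot
            actions: []
            transitions:
              - state_name: delete
                conditions: { min_index_age: "14d" }
          - name: delete
            actions: [ { delete: {} } ]
            transitions: []
    status_code: [200, 201]
  run_once: true

- name: Apply retention policy to existing indices (sketch)
  ansible.builtin.uri:
    url: "{{ opensearch_internal_endpoint }}/_plugins/_ism/add/log-*"
    method: POST
    body_format: json
    body: { policy_id: retention }
  run_once: true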
2025 14:13:44 +0000 (0:00:02.670) 0:03:04.319 ***** 2025-11-01 14:13:48.429761 | orchestrator | =============================================================================== 2025-11-01 14:13:48.429768 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 83.68s 2025-11-01 14:13:48.429775 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.43s 2025-11-01 14:13:48.429782 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.41s 2025-11-01 14:13:48.429789 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.99s 2025-11-01 14:13:48.429796 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.97s 2025-11-01 14:13:48.429803 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.67s 2025-11-01 14:13:48.429810 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.66s 2025-11-01 14:13:48.429817 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.56s 2025-11-01 14:13:48.429825 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.55s 2025-11-01 14:13:48.429832 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.24s 2025-11-01 14:13:48.429839 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.13s 2025-11-01 14:13:48.429846 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.09s 2025-11-01 14:13:48.429853 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.39s 2025-11-01 14:13:48.429860 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.35s 2025-11-01 14:13:48.429867 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.79s 2025-11-01 14:13:48.429874 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.71s 2025-11-01 14:13:48.429881 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.69s 2025-11-01 14:13:48.429888 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.58s 2025-11-01 14:13:48.429895 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2025-11-01 14:13:48.429903 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2025-11-01 14:13:48.429910 | orchestrator | 2025-11-01 14:13:48 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:48.429917 | orchestrator | 2025-11-01 14:13:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:51.469638 | orchestrator | 2025-11-01 14:13:51 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:51.470776 | orchestrator | 2025-11-01 14:13:51 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:51.470857 | orchestrator | 2025-11-01 14:13:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:54.515623 | orchestrator | 2025-11-01 14:13:54 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:54.516077 | orchestrator | 2025-11-01 14:13:54 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is 
in state STARTED 2025-11-01 14:13:54.516237 | orchestrator | 2025-11-01 14:13:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:13:57.560248 | orchestrator | 2025-11-01 14:13:57 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state STARTED 2025-11-01 14:13:57.562762 | orchestrator | 2025-11-01 14:13:57 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:13:57.563597 | orchestrator | 2025-11-01 14:13:57 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:14:00.613983 | orchestrator | 2025-11-01 14:14:00 | INFO  | Task f3bfde81-bfdc-4d18-8232-cc4fb407d910 is in state SUCCESS 2025-11-01 14:14:00.615053 | orchestrator | 2025-11-01 14:14:00.615272 | orchestrator | 2025-11-01 14:14:00.615285 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-11-01 14:14:00.615296 | orchestrator | 2025-11-01 14:14:00.615306 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-11-01 14:14:00.615332 | orchestrator | Saturday 01 November 2025 14:10:40 +0000 (0:00:00.109) 0:00:00.109 ***** 2025-11-01 14:14:00.615343 | orchestrator | ok: [localhost] => { 2025-11-01 14:14:00.615354 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-11-01 14:14:00.615365 | orchestrator | } 2025-11-01 14:14:00.615375 | orchestrator | 2025-11-01 14:14:00.615384 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-11-01 14:14:00.615394 | orchestrator | Saturday 01 November 2025 14:10:40 +0000 (0:00:00.063) 0:00:00.173 ***** 2025-11-01 14:14:00.615404 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-11-01 14:14:00.615416 | orchestrator | ...ignoring 2025-11-01 14:14:00.615426 | orchestrator | 2025-11-01 14:14:00.615435 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-11-01 14:14:00.615445 | orchestrator | Saturday 01 November 2025 14:10:43 +0000 (0:00:02.952) 0:00:03.125 ***** 2025-11-01 14:14:00.615455 | orchestrator | skipping: [localhost] 2025-11-01 14:14:00.615464 | orchestrator | 2025-11-01 14:14:00.615474 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-11-01 14:14:00.615484 | orchestrator | Saturday 01 November 2025 14:10:43 +0000 (0:00:00.072) 0:00:03.198 ***** 2025-11-01 14:14:00.615493 | orchestrator | ok: [localhost] 2025-11-01 14:14:00.615503 | orchestrator | 2025-11-01 14:14:00.615549 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:14:00.615559 | orchestrator | 2025-11-01 14:14:00.615569 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:14:00.615578 | orchestrator | Saturday 01 November 2025 14:10:44 +0000 (0:00:00.230) 0:00:03.429 ***** 2025-11-01 14:14:00.615588 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:14:00.615598 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:14:00.615607 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:14:00.615617 | orchestrator | 2025-11-01 14:14:00.615627 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:14:00.615636 | orchestrator | Saturday 01 November 2025 14:10:44 +0000 
(0:00:00.340) 0:00:03.770 ***** 2025-11-01 14:14:00.615646 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-11-01 14:14:00.615656 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-11-01 14:14:00.615666 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-11-01 14:14:00.615675 | orchestrator | 2025-11-01 14:14:00.615685 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-11-01 14:14:00.615695 | orchestrator | 2025-11-01 14:14:00.615705 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-11-01 14:14:00.615714 | orchestrator | Saturday 01 November 2025 14:10:45 +0000 (0:00:00.786) 0:00:04.556 ***** 2025-11-01 14:14:00.615724 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-01 14:14:00.615735 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-11-01 14:14:00.615745 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-11-01 14:14:00.615754 | orchestrator | 2025-11-01 14:14:00.615764 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-01 14:14:00.615774 | orchestrator | Saturday 01 November 2025 14:10:45 +0000 (0:00:00.395) 0:00:04.952 ***** 2025-11-01 14:14:00.615805 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:14:00.615816 | orchestrator | 2025-11-01 14:14:00.615826 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-11-01 14:14:00.615836 | orchestrator | Saturday 01 November 2025 14:10:46 +0000 (0:00:00.627) 0:00:05.580 ***** 2025-11-01 14:14:00.615868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 14:14:00.615885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 14:14:00.615907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 14:14:00.615919 | orchestrator | 2025-11-01 14:14:00.615938 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-11-01 14:14:00.615949 | orchestrator | Saturday 01 November 2025 14:10:49 +0000 (0:00:03.354) 0:00:08.934 ***** 2025-11-01 14:14:00.615960 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.615971 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:14:00.615987 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.615998 | orchestrator | 2025-11-01 14:14:00.616009 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-11-01 14:14:00.616019 | orchestrator | Saturday 01 November 2025 14:10:50 +0000 (0:00:00.860) 0:00:09.795 ***** 2025-11-01 14:14:00.616031 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.616042 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.616053 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:14:00.616063 | orchestrator | 2025-11-01 14:14:00.616074 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-11-01 14:14:00.616085 | orchestrator | Saturday 01 November 2025 14:10:52 +0000 (0:00:01.777) 0:00:11.573 ***** 2025-11-01 14:14:00.616097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 14:14:00.616122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 14:14:00.616149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 14:14:00.616167 | orchestrator | 
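For readability: the loop item printed repeatedly in the tasks above is the mariadb service definition the role iterates over. Rendered as YAML it corresponds roughly to the sketch below. The values are copied from the testbed-node-0 item in this log; the enclosing variable name (mariadb_services, the usual Kolla-Ansible convention) is an assumption, the password is omitted, and the disabled mariadb_external_lb entry is left out.

# Sketch reconstructed from the loop item logged for testbed-node-0.
# "mariadb_services" is an assumed variable name; MYSQL_PASSWORD and the
# disabled mariadb_external_lb block are omitted.
mariadb_services:
  mariadb:
    container_name: mariadb
    group: mariadb_shard_0
    enabled: true
    image: "registry.osism.tech/kolla/mariadb-server:2024.2"
    volumes:
      - "/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/hosts:/etc/hosts:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "mariadb:/var/lib/mysql"
      - "kolla_logs:/var/log/kolla/"
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "/usr/bin/clustercheck"]
      timeout: "30"
    environment:
      MYSQL_USERNAME: monitor
      MYSQL_HOST: "192.168.16.10"   # 192.168.16.11 / .12 on the other two nodes
      AVAILABLE_WHEN_DONOR: "1"
    haproxy:
      mariadb:
        enabled: true
        mode: tcp
        port: "3306"
        listen_port: "3306"
        frontend_tcp_extra: ["option clitcpka", "timeout client 3600s"]
        backend_tcp_extra: ["option srvtcpka", "timeout server 3600s"]
        custom_member_list:
          - " server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5"
          - " server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup"
          - " server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup"

The custom_member_list entries are literal HAProxy server lines for the Galera backend: testbed-node-0 is the active member and the other two nodes are marked backup, so writes go to a single node at a time even though all three participate in the cluster.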
2025-11-01 14:14:00.616179 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-11-01 14:14:00.616190 | orchestrator | Saturday 01 November 2025 14:10:56 +0000 (0:00:04.572) 0:00:16.145 ***** 2025-11-01 14:14:00.616201 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.616211 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.616223 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:14:00.616234 | orchestrator | 2025-11-01 14:14:00.616245 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-11-01 14:14:00.616254 | orchestrator | Saturday 01 November 2025 14:10:58 +0000 (0:00:01.176) 0:00:17.322 ***** 2025-11-01 14:14:00.616264 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:14:00.616273 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:14:00.616283 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:14:00.616292 | orchestrator | 2025-11-01 14:14:00.616302 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-01 14:14:00.616312 | orchestrator | Saturday 01 November 2025 14:11:02 +0000 (0:00:04.691) 0:00:22.013 ***** 2025-11-01 14:14:00.616321 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:14:00.616331 | orchestrator | 2025-11-01 14:14:00.616340 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-11-01 14:14:00.616350 | orchestrator | Saturday 01 November 2025 14:11:03 +0000 (0:00:00.584) 0:00:22.598 ***** 2025-11-01 14:14:00.616372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:14:00.616384 | 
orchestrator | skipping: [testbed-node-0] 2025-11-01 14:14:00.616395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:14:00.616412 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.616433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:14:00.616444 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.616454 | orchestrator | 2025-11-01 14:14:00.616463 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-11-01 14:14:00.616473 | orchestrator | Saturday 01 November 2025 14:11:07 +0000 (0:00:04.563) 0:00:27.162 ***** 2025-11-01 14:14:00.616483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:14:00.616500 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.616546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 
''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:14:00.616558 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.616573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:14:00.616591 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:14:00.616601 | orchestrator | 2025-11-01 14:14:00.616611 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-11-01 14:14:00.616620 | orchestrator | Saturday 01 November 2025 14:11:11 +0000 (0:00:03.549) 0:00:30.712 ***** 2025-11-01 14:14:00.616630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:14:00.616641 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.616663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:14:00.616680 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.616691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 14:14:00.616701 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:14:00.616711 | orchestrator | 2025-11-01 14:14:00.616720 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-11-01 14:14:00.616730 | orchestrator | Saturday 01 November 2025 14:11:15 +0000 (0:00:04.232) 0:00:34.945 ***** 2025-11-01 14:14:00.616752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 14:14:00.616769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 14:14:00.616793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 14:14:00.616811 | orchestrator | 2025-11-01 14:14:00.616821 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-11-01 14:14:00.616831 | orchestrator | Saturday 01 November 2025 14:11:19 +0000 (0:00:03.442) 0:00:38.387 ***** 2025-11-01 14:14:00.616841 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:14:00.616850 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:14:00.616860 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:14:00.616869 | orchestrator | 2025-11-01 14:14:00.616879 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-11-01 14:14:00.616889 | orchestrator | Saturday 01 November 2025 14:11:20 +0000 (0:00:00.842) 0:00:39.230 ***** 2025-11-01 14:14:00.616898 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:14:00.616908 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:14:00.616917 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:14:00.616927 | orchestrator | 2025-11-01 14:14:00.616936 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-11-01 14:14:00.616946 | orchestrator | Saturday 01 November 2025 14:11:20 +0000 (0:00:00.513) 0:00:39.743 ***** 2025-11-01 14:14:00.616955 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:14:00.616965 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:14:00.616974 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:14:00.616983 | orchestrator | 2025-11-01 14:14:00.616993 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-11-01 14:14:00.617003 | orchestrator | Saturday 01 November 2025 14:11:20 +0000 (0:00:00.336) 0:00:40.079 ***** 2025-11-01 14:14:00.617013 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-11-01 14:14:00.617023 | orchestrator | ...ignoring 2025-11-01 14:14:00.617033 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-11-01 14:14:00.617042 | orchestrator | ...ignoring 2025-11-01 14:14:00.617052 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-11-01 14:14:00.617062 | orchestrator | ...ignoring 2025-11-01 14:14:00.617071 | orchestrator | 2025-11-01 14:14:00.617081 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-11-01 14:14:00.617091 | orchestrator | Saturday 01 November 2025 14:11:31 +0000 (0:00:10.885) 0:00:50.965 ***** 2025-11-01 14:14:00.617100 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:14:00.617109 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:14:00.617119 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:14:00.617128 | orchestrator | 2025-11-01 14:14:00.617138 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-11-01 14:14:00.617147 | orchestrator | Saturday 01 November 2025 14:11:32 +0000 (0:00:00.494) 0:00:51.460 ***** 2025-11-01 14:14:00.617157 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:14:00.617167 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.617176 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.617186 | orchestrator | 2025-11-01 14:14:00.617195 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-11-01 14:14:00.617211 | orchestrator | Saturday 01 November 2025 14:11:32 +0000 (0:00:00.705) 0:00:52.165 ***** 2025-11-01 14:14:00.617221 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:14:00.617230 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.617240 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.617249 | orchestrator | 2025-11-01 14:14:00.617259 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-11-01 14:14:00.617268 | orchestrator | Saturday 01 November 2025 14:11:33 +0000 (0:00:00.510) 0:00:52.675 ***** 2025-11-01 14:14:00.617278 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:14:00.617287 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.617296 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.617306 | orchestrator | 2025-11-01 14:14:00.617315 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-11-01 14:14:00.617325 | orchestrator | Saturday 01 November 2025 14:11:33 +0000 (0:00:00.419) 0:00:53.095 ***** 2025-11-01 14:14:00.617334 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:14:00.617344 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:14:00.617353 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:14:00.617363 | orchestrator | 2025-11-01 14:14:00.617372 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-11-01 14:14:00.617382 | orchestrator | Saturday 01 November 2025 14:11:34 +0000 (0:00:00.487) 0:00:53.582 ***** 2025-11-01 14:14:00.617396 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:14:00.617406 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.617415 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.617425 | orchestrator | 2025-11-01 14:14:00.617434 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-01 14:14:00.617451 | orchestrator | Saturday 01 November 2025 14:11:35 +0000 (0:00:00.720) 0:00:54.302 ***** 2025-11-01 14:14:00.617461 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.617471 | orchestrator | skipping: 
[testbed-node-2] 2025-11-01 14:14:00.617481 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-11-01 14:14:00.617490 | orchestrator | 2025-11-01 14:14:00.617500 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-11-01 14:14:00.617523 | orchestrator | Saturday 01 November 2025 14:11:35 +0000 (0:00:00.422) 0:00:54.725 ***** 2025-11-01 14:14:00.617533 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:14:00.617542 | orchestrator | 2025-11-01 14:14:00.617552 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-11-01 14:14:00.617561 | orchestrator | Saturday 01 November 2025 14:11:46 +0000 (0:00:10.666) 0:01:05.391 ***** 2025-11-01 14:14:00.617570 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:14:00.617580 | orchestrator | 2025-11-01 14:14:00.617589 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-01 14:14:00.617599 | orchestrator | Saturday 01 November 2025 14:11:46 +0000 (0:00:00.151) 0:01:05.542 ***** 2025-11-01 14:14:00.617608 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:14:00.617618 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.617627 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.617637 | orchestrator | 2025-11-01 14:14:00.617646 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-11-01 14:14:00.617656 | orchestrator | Saturday 01 November 2025 14:11:47 +0000 (0:00:01.077) 0:01:06.620 ***** 2025-11-01 14:14:00.617665 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:14:00.617675 | orchestrator | 2025-11-01 14:14:00.617685 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-11-01 14:14:00.617694 | orchestrator | Saturday 01 November 2025 14:11:55 +0000 (0:00:08.346) 0:01:14.967 ***** 2025-11-01 14:14:00.617704 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:14:00.617713 | orchestrator | 2025-11-01 14:14:00.617723 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-11-01 14:14:00.617733 | orchestrator | Saturday 01 November 2025 14:11:58 +0000 (0:00:02.596) 0:01:17.564 ***** 2025-11-01 14:14:00.617748 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:14:00.617758 | orchestrator | 2025-11-01 14:14:00.617767 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-11-01 14:14:00.617777 | orchestrator | Saturday 01 November 2025 14:12:01 +0000 (0:00:02.741) 0:01:20.305 ***** 2025-11-01 14:14:00.617786 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:14:00.617796 | orchestrator | 2025-11-01 14:14:00.617806 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-11-01 14:14:00.617815 | orchestrator | Saturday 01 November 2025 14:12:01 +0000 (0:00:00.135) 0:01:20.441 ***** 2025-11-01 14:14:00.617825 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:14:00.617834 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.617843 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.617853 | orchestrator | 2025-11-01 14:14:00.617862 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-11-01 14:14:00.617872 | orchestrator | Saturday 01 November 2025 14:12:01 +0000 (0:00:00.314) 0:01:20.755 ***** 
2025-11-01 14:14:00.617881 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:14:00.617891 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-11-01 14:14:00.617901 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:14:00.617910 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:14:00.617920 | orchestrator | 2025-11-01 14:14:00.617929 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-11-01 14:14:00.617939 | orchestrator | skipping: no hosts matched 2025-11-01 14:14:00.617948 | orchestrator | 2025-11-01 14:14:00.617957 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-11-01 14:14:00.617967 | orchestrator | 2025-11-01 14:14:00.617976 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-11-01 14:14:00.617986 | orchestrator | Saturday 01 November 2025 14:12:02 +0000 (0:00:00.604) 0:01:21.360 ***** 2025-11-01 14:14:00.617995 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:14:00.618005 | orchestrator | 2025-11-01 14:14:00.618055 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-11-01 14:14:00.618068 | orchestrator | Saturday 01 November 2025 14:12:25 +0000 (0:00:23.631) 0:01:44.991 ***** 2025-11-01 14:14:00.618078 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:14:00.618087 | orchestrator | 2025-11-01 14:14:00.618097 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-11-01 14:14:00.618106 | orchestrator | Saturday 01 November 2025 14:12:41 +0000 (0:00:15.642) 0:02:00.634 ***** 2025-11-01 14:14:00.618116 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:14:00.618125 | orchestrator | 2025-11-01 14:14:00.618135 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-11-01 14:14:00.618144 | orchestrator | 2025-11-01 14:14:00.618154 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-11-01 14:14:00.618163 | orchestrator | Saturday 01 November 2025 14:12:44 +0000 (0:00:02.646) 0:02:03.280 ***** 2025-11-01 14:14:00.618173 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:14:00.618182 | orchestrator | 2025-11-01 14:14:00.618192 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-11-01 14:14:00.618201 | orchestrator | Saturday 01 November 2025 14:13:01 +0000 (0:00:17.590) 0:02:20.871 ***** 2025-11-01 14:14:00.618210 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:14:00.618220 | orchestrator | 2025-11-01 14:14:00.618229 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-11-01 14:14:00.618239 | orchestrator | Saturday 01 November 2025 14:13:22 +0000 (0:00:20.614) 0:02:41.486 ***** 2025-11-01 14:14:00.618248 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:14:00.618258 | orchestrator | 2025-11-01 14:14:00.618267 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-11-01 14:14:00.618277 | orchestrator | 2025-11-01 14:14:00.618292 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-11-01 14:14:00.618302 | orchestrator | Saturday 01 November 2025 14:13:25 +0000 (0:00:02.742) 0:02:44.229 ***** 2025-11-01 14:14:00.618317 | orchestrator | changed: [testbed-node-0] 
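The "Check MariaDB service port liveness" and "Wait for MariaDB service port liveness" steps above, including the ignored "Timeout when waiting for search string MariaDB" failures while nothing is listening yet, are plain TCP probes that look for the MariaDB greeting on port 3306. A minimal sketch with ansible.builtin.wait_for, assuming the variable names shown in the comments; this is illustrative, not the exact task from the role:

# Illustrative only; variable names are assumptions, not taken from the role.
- name: Wait for MariaDB service port liveness
  ansible.builtin.wait_for:
    host: "{{ mariadb_listen_address }}"   # e.g. 192.168.16.10 on testbed-node-0
    port: 3306
    connect_timeout: 1
    timeout: 60
    search_regex: "MariaDB"                # the server handshake contains this string
  register: mariadb_port_check
  failed_when: false                       # the early checks above tolerate a timeout ("...ignoring")

The same kind of probe produces the ignored failure at the very start of the play ("Check MariaDB service" against 192.168.16.9:3306, presumably the internal VIP); its outcome is what switches kolla_action_mariadb between a fresh deploy and an upgrade of an already running cluster.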
2025-11-01 14:14:00.618327 | orchestrator | 2025-11-01 14:14:00.618341 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-11-01 14:14:00.618351 | orchestrator | Saturday 01 November 2025 14:13:38 +0000 (0:00:13.413) 0:02:57.642 ***** 2025-11-01 14:14:00.618361 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:14:00.618370 | orchestrator | 2025-11-01 14:14:00.618380 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-11-01 14:14:00.618389 | orchestrator | Saturday 01 November 2025 14:13:43 +0000 (0:00:04.627) 0:03:02.270 ***** 2025-11-01 14:14:00.618398 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:14:00.618408 | orchestrator | 2025-11-01 14:14:00.618417 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-11-01 14:14:00.618427 | orchestrator | 2025-11-01 14:14:00.618436 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-11-01 14:14:00.618445 | orchestrator | Saturday 01 November 2025 14:13:46 +0000 (0:00:03.036) 0:03:05.306 ***** 2025-11-01 14:14:00.618455 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:14:00.618464 | orchestrator | 2025-11-01 14:14:00.618474 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-11-01 14:14:00.618483 | orchestrator | Saturday 01 November 2025 14:13:46 +0000 (0:00:00.572) 0:03:05.879 ***** 2025-11-01 14:14:00.618493 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.618502 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.618560 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:14:00.618571 | orchestrator | 2025-11-01 14:14:00.618581 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-11-01 14:14:00.618590 | orchestrator | Saturday 01 November 2025 14:13:49 +0000 (0:00:02.460) 0:03:08.339 ***** 2025-11-01 14:14:00.618600 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.618610 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.618619 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:14:00.618629 | orchestrator | 2025-11-01 14:14:00.618638 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-11-01 14:14:00.618648 | orchestrator | Saturday 01 November 2025 14:13:51 +0000 (0:00:02.508) 0:03:10.847 ***** 2025-11-01 14:14:00.618657 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.618667 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.618676 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:14:00.618686 | orchestrator | 2025-11-01 14:14:00.618695 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-11-01 14:14:00.618704 | orchestrator | Saturday 01 November 2025 14:13:54 +0000 (0:00:02.546) 0:03:13.394 ***** 2025-11-01 14:14:00.618714 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.618723 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.618733 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:14:00.618742 | orchestrator | 2025-11-01 14:14:00.618752 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-11-01 14:14:00.618761 | orchestrator | Saturday 01 November 2025 14:13:56 +0000 (0:00:02.365) 0:03:15.759 ***** 
2025-11-01 14:14:00.618771 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:14:00.618780 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:14:00.618790 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:14:00.618799 | orchestrator | 2025-11-01 14:14:00.618809 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-11-01 14:14:00.618818 | orchestrator | Saturday 01 November 2025 14:13:59 +0000 (0:00:03.358) 0:03:19.117 ***** 2025-11-01 14:14:00.618828 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:14:00.618837 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:14:00.618847 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:14:00.618856 | orchestrator | 2025-11-01 14:14:00.618865 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:14:00.618881 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-11-01 14:14:00.618892 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-11-01 14:14:00.618903 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-11-01 14:14:00.618912 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-11-01 14:14:00.618922 | orchestrator | 2025-11-01 14:14:00.618932 | orchestrator | 2025-11-01 14:14:00.618941 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:14:00.618951 | orchestrator | Saturday 01 November 2025 14:14:00 +0000 (0:00:00.257) 0:03:19.375 ***** 2025-11-01 14:14:00.618960 | orchestrator | =============================================================================== 2025-11-01 14:14:00.618970 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 41.22s 2025-11-01 14:14:00.618979 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.26s 2025-11-01 14:14:00.618989 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.41s 2025-11-01 14:14:00.618998 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.89s 2025-11-01 14:14:00.619008 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.67s 2025-11-01 14:14:00.619017 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.35s 2025-11-01 14:14:00.619032 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.39s 2025-11-01 14:14:00.619042 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.69s 2025-11-01 14:14:00.619052 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.63s 2025-11-01 14:14:00.619066 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.57s 2025-11-01 14:14:00.619076 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.56s 2025-11-01 14:14:00.619085 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.23s 2025-11-01 14:14:00.619095 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.55s 2025-11-01 14:14:00.619104 | orchestrator | mariadb : Check mariadb containers 
-------------------------------------- 3.44s 2025-11-01 14:14:00.619114 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.36s 2025-11-01 14:14:00.619123 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.35s 2025-11-01 14:14:00.619133 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.04s 2025-11-01 14:14:00.619142 | orchestrator | Check MariaDB service --------------------------------------------------- 2.95s 2025-11-01 14:14:00.619152 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.74s 2025-11-01 14:14:00.619161 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.60s 2025-11-01 14:14:00.619171 | orchestrator | 2025-11-01 14:14:00 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:14:00.619180 | orchestrator | 2025-11-01 14:14:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:14:03.656229 | orchestrator | 2025-11-01 14:14:03 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED 2025-11-01 14:14:03.657921 | orchestrator | 2025-11-01 14:14:03 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:14:03.661097 | orchestrator | 2025-11-01 14:14:03 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:14:03.662090 | orchestrator | 2025-11-01 14:14:03 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:14:06.717099 | orchestrator | 2025-11-01 14:14:06 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED 2025-11-01 14:14:06.718699 | orchestrator | 2025-11-01 14:14:06 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:14:06.718732 | orchestrator | 2025-11-01 14:14:06 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:14:06.718744 | orchestrator | 2025-11-01 14:14:06 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:14:09.762231 | orchestrator | 2025-11-01 14:14:09 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED 2025-11-01 14:14:09.764436 | orchestrator | 2025-11-01 14:14:09 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:14:09.766662 | orchestrator | 2025-11-01 14:14:09 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:14:09.767012 | orchestrator | 2025-11-01 14:14:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:14:12.808835 | orchestrator | 2025-11-01 14:14:12 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED 2025-11-01 14:14:12.814600 | orchestrator | 2025-11-01 14:14:12 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:14:12.816649 | orchestrator | 2025-11-01 14:14:12 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:14:12.816910 | orchestrator | 2025-11-01 14:14:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:14:15.853794 | orchestrator | 2025-11-01 14:14:15 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED 2025-11-01 14:14:15.854383 | orchestrator | 2025-11-01 14:14:15 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:14:15.855275 | orchestrator | 2025-11-01 14:14:15 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:14:15.855303 | 
orchestrator | 2025-11-01 14:14:15 | INFO  | Wait 1 second(s) until the next check [identical STARTED/wait status checks for tasks f9288e9e-09af-41a3-8640-866ab8f19e93, bebc7161-f24f-4d86-a413-086a56062371 and 4514a574-6d53-47e9-9561-3384f5e656be repeated every ~3 seconds from 14:14:18 through 14:15:16] 2025-11-01 14:15:19.910121 | orchestrator | 2025-11-01 14:15:19 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED 2025-11-01 14:15:19.912274 | orchestrator
| 2025-11-01 14:15:19 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:15:19.914555 | orchestrator | 2025-11-01 14:15:19 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:15:19.914602 | orchestrator | 2025-11-01 14:15:19 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:15:22.957603 | orchestrator | 2025-11-01 14:15:22 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED 2025-11-01 14:15:22.958937 | orchestrator | 2025-11-01 14:15:22 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:15:22.960583 | orchestrator | 2025-11-01 14:15:22 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state STARTED 2025-11-01 14:15:22.960691 | orchestrator | 2025-11-01 14:15:22 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:15:26.012943 | orchestrator | 2025-11-01 14:15:26 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED 2025-11-01 14:15:26.013731 | orchestrator | 2025-11-01 14:15:26 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:15:26.015694 | orchestrator | 2025-11-01 14:15:26 | INFO  | Task 4514a574-6d53-47e9-9561-3384f5e656be is in state SUCCESS 2025-11-01 14:15:26.017747 | orchestrator | 2025-11-01 14:15:26.017782 | orchestrator | 2025-11-01 14:15:26.017795 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-11-01 14:15:26.017806 | orchestrator | 2025-11-01 14:15:26.017818 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-11-01 14:15:26.018220 | orchestrator | Saturday 01 November 2025 14:13:06 +0000 (0:00:00.689) 0:00:00.689 ***** 2025-11-01 14:15:26.018241 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:15:26.018253 | orchestrator | 2025-11-01 14:15:26.018264 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-11-01 14:15:26.018275 | orchestrator | Saturday 01 November 2025 14:13:06 +0000 (0:00:00.673) 0:00:01.363 ***** 2025-11-01 14:15:26.018286 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:15:26.018298 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:15:26.018331 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:15:26.018342 | orchestrator | 2025-11-01 14:15:26.018353 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-11-01 14:15:26.018363 | orchestrator | Saturday 01 November 2025 14:13:07 +0000 (0:00:00.621) 0:00:01.984 ***** 2025-11-01 14:15:26.018374 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:15:26.018412 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:15:26.018425 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:15:26.018436 | orchestrator | 2025-11-01 14:15:26.018447 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-11-01 14:15:26.018648 | orchestrator | Saturday 01 November 2025 14:13:07 +0000 (0:00:00.311) 0:00:02.296 ***** 2025-11-01 14:15:26.018661 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:15:26.018672 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:15:26.018682 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:15:26.018893 | orchestrator | 2025-11-01 14:15:26.018905 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-11-01 14:15:26.018916 
| orchestrator | Saturday 01 November 2025 14:13:08 +0000 (0:00:00.902) 0:00:03.198 ***** 2025-11-01 14:15:26.018926 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:15:26.018938 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:15:26.018948 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:15:26.018959 | orchestrator | 2025-11-01 14:15:26.018970 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-11-01 14:15:26.018981 | orchestrator | Saturday 01 November 2025 14:13:08 +0000 (0:00:00.339) 0:00:03.537 ***** 2025-11-01 14:15:26.018991 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:15:26.019002 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:15:26.019012 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:15:26.019023 | orchestrator | 2025-11-01 14:15:26.019034 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-11-01 14:15:26.019045 | orchestrator | Saturday 01 November 2025 14:13:09 +0000 (0:00:00.343) 0:00:03.880 ***** 2025-11-01 14:15:26.019055 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:15:26.019066 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:15:26.019077 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:15:26.019087 | orchestrator | 2025-11-01 14:15:26.019098 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-11-01 14:15:26.019109 | orchestrator | Saturday 01 November 2025 14:13:09 +0000 (0:00:00.326) 0:00:04.207 ***** 2025-11-01 14:15:26.019120 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.019132 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:15:26.019143 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:15:26.019153 | orchestrator | 2025-11-01 14:15:26.019164 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-11-01 14:15:26.019175 | orchestrator | Saturday 01 November 2025 14:13:10 +0000 (0:00:00.542) 0:00:04.749 ***** 2025-11-01 14:15:26.019185 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:15:26.019196 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:15:26.019207 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:15:26.019217 | orchestrator | 2025-11-01 14:15:26.019228 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-11-01 14:15:26.019239 | orchestrator | Saturday 01 November 2025 14:13:10 +0000 (0:00:00.349) 0:00:05.098 ***** 2025-11-01 14:15:26.019259 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-01 14:15:26.019271 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 14:15:26.019282 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 14:15:26.019292 | orchestrator | 2025-11-01 14:15:26.019303 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-11-01 14:15:26.019314 | orchestrator | Saturday 01 November 2025 14:13:11 +0000 (0:00:00.668) 0:00:05.767 ***** 2025-11-01 14:15:26.019324 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:15:26.019335 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:15:26.019346 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:15:26.019365 | orchestrator | 2025-11-01 14:15:26.019376 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-11-01 
14:15:26.019386 | orchestrator | Saturday 01 November 2025 14:13:11 +0000 (0:00:00.481) 0:00:06.248 ***** 2025-11-01 14:15:26.019397 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-01 14:15:26.019408 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 14:15:26.019418 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 14:15:26.019429 | orchestrator | 2025-11-01 14:15:26.019440 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-11-01 14:15:26.019451 | orchestrator | Saturday 01 November 2025 14:13:13 +0000 (0:00:02.324) 0:00:08.573 ***** 2025-11-01 14:15:26.019462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-01 14:15:26.019473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-01 14:15:26.019483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-11-01 14:15:26.019494 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.019505 | orchestrator | 2025-11-01 14:15:26.019519 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-11-01 14:15:26.019593 | orchestrator | Saturday 01 November 2025 14:13:14 +0000 (0:00:00.671) 0:00:09.244 ***** 2025-11-01 14:15:26.019610 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.019626 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.019640 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.019653 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.019665 | orchestrator | 2025-11-01 14:15:26.019678 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-11-01 14:15:26.019689 | orchestrator | Saturday 01 November 2025 14:13:15 +0000 (0:00:00.851) 0:00:10.095 ***** 2025-11-01 14:15:26.019704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.019718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.019732 | orchestrator | 
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.019744 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.019757 | orchestrator | 2025-11-01 14:15:26.019778 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-11-01 14:15:26.019790 | orchestrator | Saturday 01 November 2025 14:13:15 +0000 (0:00:00.357) 0:00:10.452 ***** 2025-11-01 14:15:26.019810 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'de7733108cd5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-11-01 14:13:12.313023', 'end': '2025-11-01 14:13:12.368639', 'delta': '0:00:00.055616', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de7733108cd5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-11-01 14:15:26.019827 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cc092727f209', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-11-01 14:13:13.141718', 'end': '2025-11-01 14:13:13.195423', 'delta': '0:00:00.053705', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cc092727f209'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-11-01 14:15:26.019871 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '896120e1af93', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-11-01 14:13:13.736606', 'end': '2025-11-01 14:13:13.770968', 'delta': '0:00:00.034362', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['896120e1af93'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-11-01 14:15:26.019884 | orchestrator | 2025-11-01 14:15:26.019895 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-11-01 14:15:26.019906 | orchestrator | Saturday 01 November 2025 14:13:16 +0000 (0:00:00.235) 0:00:10.688 ***** 2025-11-01 14:15:26.019917 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:15:26.019927 | orchestrator | ok: 
[testbed-node-4] 2025-11-01 14:15:26.019938 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:15:26.019949 | orchestrator | 2025-11-01 14:15:26.019960 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-11-01 14:15:26.019970 | orchestrator | Saturday 01 November 2025 14:13:16 +0000 (0:00:00.464) 0:00:11.153 ***** 2025-11-01 14:15:26.019981 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-11-01 14:15:26.019992 | orchestrator | 2025-11-01 14:15:26.020002 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-11-01 14:15:26.020013 | orchestrator | Saturday 01 November 2025 14:13:18 +0000 (0:00:01.909) 0:00:13.063 ***** 2025-11-01 14:15:26.020024 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.020035 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:15:26.020046 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:15:26.020056 | orchestrator | 2025-11-01 14:15:26.020067 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-11-01 14:15:26.020078 | orchestrator | Saturday 01 November 2025 14:13:18 +0000 (0:00:00.314) 0:00:13.377 ***** 2025-11-01 14:15:26.020088 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.020106 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:15:26.020117 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:15:26.020127 | orchestrator | 2025-11-01 14:15:26.020138 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-11-01 14:15:26.020149 | orchestrator | Saturday 01 November 2025 14:13:19 +0000 (0:00:00.404) 0:00:13.782 ***** 2025-11-01 14:15:26.020159 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.020170 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:15:26.020181 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:15:26.020191 | orchestrator | 2025-11-01 14:15:26.020202 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-11-01 14:15:26.020213 | orchestrator | Saturday 01 November 2025 14:13:19 +0000 (0:00:00.526) 0:00:14.309 ***** 2025-11-01 14:15:26.020224 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:15:26.020234 | orchestrator | 2025-11-01 14:15:26.020245 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-11-01 14:15:26.020256 | orchestrator | Saturday 01 November 2025 14:13:19 +0000 (0:00:00.146) 0:00:14.455 ***** 2025-11-01 14:15:26.020266 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.020277 | orchestrator | 2025-11-01 14:15:26.020288 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-11-01 14:15:26.020298 | orchestrator | Saturday 01 November 2025 14:13:20 +0000 (0:00:00.277) 0:00:14.732 ***** 2025-11-01 14:15:26.020309 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.020320 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:15:26.020330 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:15:26.020341 | orchestrator | 2025-11-01 14:15:26.020363 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-11-01 14:15:26.020374 | orchestrator | Saturday 01 November 2025 14:13:20 +0000 (0:00:00.324) 0:00:15.056 ***** 2025-11-01 14:15:26.020385 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.020395 
| orchestrator | skipping: [testbed-node-4] 2025-11-01 14:15:26.020406 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:15:26.020417 | orchestrator | 2025-11-01 14:15:26.020427 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-11-01 14:15:26.020438 | orchestrator | Saturday 01 November 2025 14:13:20 +0000 (0:00:00.361) 0:00:15.418 ***** 2025-11-01 14:15:26.020449 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.020459 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:15:26.020470 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:15:26.020481 | orchestrator | 2025-11-01 14:15:26.020491 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-11-01 14:15:26.020502 | orchestrator | Saturday 01 November 2025 14:13:21 +0000 (0:00:00.575) 0:00:15.993 ***** 2025-11-01 14:15:26.020513 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.020569 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:15:26.020582 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:15:26.020592 | orchestrator | 2025-11-01 14:15:26.020603 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-11-01 14:15:26.020614 | orchestrator | Saturday 01 November 2025 14:13:21 +0000 (0:00:00.382) 0:00:16.375 ***** 2025-11-01 14:15:26.020624 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.020635 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:15:26.020646 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:15:26.020656 | orchestrator | 2025-11-01 14:15:26.020667 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-11-01 14:15:26.020678 | orchestrator | Saturday 01 November 2025 14:13:22 +0000 (0:00:00.349) 0:00:16.725 ***** 2025-11-01 14:15:26.020689 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.020699 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:15:26.020710 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:15:26.020720 | orchestrator | 2025-11-01 14:15:26.020731 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-11-01 14:15:26.020775 | orchestrator | Saturday 01 November 2025 14:13:22 +0000 (0:00:00.355) 0:00:17.080 ***** 2025-11-01 14:15:26.020794 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.020804 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:15:26.020815 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:15:26.020825 | orchestrator | 2025-11-01 14:15:26.020836 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-11-01 14:15:26.020847 | orchestrator | Saturday 01 November 2025 14:13:23 +0000 (0:00:00.595) 0:00:17.676 ***** 2025-11-01 14:15:26.020859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--47edfe94--e799--500a--9f78--eae255c41273-osd--block--47edfe94--e799--500a--9f78--eae255c41273', 'dm-uuid-LVM-5iecivp83EVTPr28Zo82u3SmraqQlgMlOF259DuNwwbNlvXYyRFxWEqhT3Hwjj3T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  
2025-11-01 14:15:26.020872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--efff7302--70e8--5bbc--90af--2166d1a25777-osd--block--efff7302--70e8--5bbc--90af--2166d1a25777', 'dm-uuid-LVM-64JnOfFwPenpvQr3sa3Knbc6XItP1ImhCmNfxnZcc0pZgEPfDBxMy1CqiNPlMPAh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.020883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.020895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.020912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.020924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.020935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.020981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.020995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part1', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part14', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part15', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part16', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--47edfe94--e799--500a--9f78--eae255c41273-osd--block--47edfe94--e799--500a--9f78--eae255c41273'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YpnRQB-jBP1-82g6-g8fd-LeRA-d7Tm-eXHHyS', 'scsi-0QEMU_QEMU_HARDDISK_4fee078c-1565-4ab1-bdda-b8bebdd42045', 'scsi-SQEMU_QEMU_HARDDISK_4fee078c-1565-4ab1-bdda-b8bebdd42045'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--efff7302--70e8--5bbc--90af--2166d1a25777-osd--block--efff7302--70e8--5bbc--90af--2166d1a25777'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iiuCsz-VrdB-dgiu-Kx5a-URcy-cBxF-vraN5N', 'scsi-0QEMU_QEMU_HARDDISK_c17a8236-4766-4598-abab-5d58d5ce65a6', 'scsi-SQEMU_QEMU_HARDDISK_c17a8236-4766-4598-abab-5d58d5ce65a6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d89e604-ccfa-4ce6-abe5-76180138882d', 'scsi-SQEMU_QEMU_HARDDISK_7d89e604-ccfa-4ce6-abe5-76180138882d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bf0a4791--ac15--5066--8808--a0a6deeb0cc9-osd--block--bf0a4791--ac15--5066--8808--a0a6deeb0cc9', 'dm-uuid-LVM-hFELSVlkHF2T1dngUyA28Zszw7xESt5CX4RRltN9L1kY8z3IVItlbqJ5Spb0z1k6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5630d3b4--f241--5aa8--9956--015e1822542e-osd--block--5630d3b4--f241--5aa8--9956--015e1822542e', 
'dm-uuid-LVM-qVkuHQLPgmWWE2KI6ybDQxfNnLOCMMNUgFoysg5F3RFhAWIRm1IRmZHTEmyEA3hr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}})  2025-11-01 14:15:26.021273 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.021284 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bf0a4791--ac15--5066--8808--a0a6deeb0cc9-osd--block--bf0a4791--ac15--5066--8808--a0a6deeb0cc9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lShadK-zkzo-yGlR-ygJ8-c3QC-QIx0-1fIdlx', 'scsi-0QEMU_QEMU_HARDDISK_08ca9d91-9929-4ba3-9cad-ed75b64a043e', 'scsi-SQEMU_QEMU_HARDDISK_08ca9d91-9929-4ba3-9cad-ed75b64a043e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5630d3b4--f241--5aa8--9956--015e1822542e-osd--block--5630d3b4--f241--5aa8--9956--015e1822542e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-75UM0C-uJ1p-OTB6-fYTA-kSPP-fCvT-SJk04U', 'scsi-0QEMU_QEMU_HARDDISK_072d7475-b9a0-4b66-89cc-e4fcf46016ff', 'scsi-SQEMU_QEMU_HARDDISK_072d7475-b9a0-4b66-89cc-e4fcf46016ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5b7cda2-7cd1-4139-8c09-f2864ed6115a', 'scsi-SQEMU_QEMU_HARDDISK_d5b7cda2-7cd1-4139-8c09-f2864ed6115a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021403 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:15:26.021421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f-osd--block--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f', 'dm-uuid-LVM-fxhH0SkmSCqWU4Wy7dw5tLQClhfedOljDMkZCymSSYMmhuftj12v8Tpyva9L0mc8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--7e540012--4fa7--591e--a498--149cbb5b09d9-osd--block--7e540012--4fa7--591e--a498--149cbb5b09d9', 'dm-uuid-LVM-eZCOPRchOkTotQpPgPuFhEXd8dSlq2Gd5ddJWqEgSxlUkQ9NQ1bOeX0PsU7Z3aN0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 14:15:26.021583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part1', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part14', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part15', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part16', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f-osd--block--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kj0JoU-vxqx-oB5o-AIwD-oewZ-c72h-zh64Ec', 'scsi-0QEMU_QEMU_HARDDISK_dbba508b-4e10-452f-8431-011284f42e7d', 'scsi-SQEMU_QEMU_HARDDISK_dbba508b-4e10-452f-8431-011284f42e7d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7e540012--4fa7--591e--a498--149cbb5b09d9-osd--block--7e540012--4fa7--591e--a498--149cbb5b09d9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-chllP0-9PgN-Y42z-FzFx-ub4p-LQKx-I4sZ4l', 'scsi-0QEMU_QEMU_HARDDISK_f57a5620-543a-43ae-a22d-8a42cad6fb24', 'scsi-SQEMU_QEMU_HARDDISK_f57a5620-543a-43ae-a22d-8a42cad6fb24'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c347dc72-435c-43d5-a9cf-2c60f1de142e', 'scsi-SQEMU_QEMU_HARDDISK_c347dc72-435c-43d5-a9cf-2c60f1de142e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 14:15:26.021663 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:15:26.021674 | orchestrator | 2025-11-01 14:15:26.021685 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-11-01 14:15:26.021696 | orchestrator | Saturday 01 November 2025 14:13:23 +0000 (0:00:00.744) 0:00:18.420 ***** 2025-11-01 14:15:26.021708 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--47edfe94--e799--500a--9f78--eae255c41273-osd--block--47edfe94--e799--500a--9f78--eae255c41273', 'dm-uuid-LVM-5iecivp83EVTPr28Zo82u3SmraqQlgMlOF259DuNwwbNlvXYyRFxWEqhT3Hwjj3T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021720 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--efff7302--70e8--5bbc--90af--2166d1a25777-osd--block--efff7302--70e8--5bbc--90af--2166d1a25777', 'dm-uuid-LVM-64JnOfFwPenpvQr3sa3Knbc6XItP1ImhCmNfxnZcc0pZgEPfDBxMy1CqiNPlMPAh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021731 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021754 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bf0a4791--ac15--5066--8808--a0a6deeb0cc9-osd--block--bf0a4791--ac15--5066--8808--a0a6deeb0cc9', 'dm-uuid-LVM-hFELSVlkHF2T1dngUyA28Zszw7xESt5CX4RRltN9L1kY8z3IVItlbqJ5Spb0z1k6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021766 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021783 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--5630d3b4--f241--5aa8--9956--015e1822542e-osd--block--5630d3b4--f241--5aa8--9956--015e1822542e', 'dm-uuid-LVM-qVkuHQLPgmWWE2KI6ybDQxfNnLOCMMNUgFoysg5F3RFhAWIRm1IRmZHTEmyEA3hr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021795 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021806 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021817 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021839 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021851 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021870 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021882 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021893 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021904 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021927 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021939 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021950 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021971 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd211c63-6e70-45f0-80e9-be44e116b0ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.021995 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bf0a4791--ac15--5066--8808--a0a6deeb0cc9-osd--block--bf0a4791--ac15--5066--8808--a0a6deeb0cc9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lShadK-zkzo-yGlR-ygJ8-c3QC-QIx0-1fIdlx', 'scsi-0QEMU_QEMU_HARDDISK_08ca9d91-9929-4ba3-9cad-ed75b64a043e', 'scsi-SQEMU_QEMU_HARDDISK_08ca9d91-9929-4ba3-9cad-ed75b64a043e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022008 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022055 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5630d3b4--f241--5aa8--9956--015e1822542e-osd--block--5630d3b4--f241--5aa8--9956--015e1822542e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-75UM0C-uJ1p-OTB6-fYTA-kSPP-fCvT-SJk04U', 'scsi-0QEMU_QEMU_HARDDISK_072d7475-b9a0-4b66-89cc-e4fcf46016ff', 'scsi-SQEMU_QEMU_HARDDISK_072d7475-b9a0-4b66-89cc-e4fcf46016ff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022070 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5b7cda2-7cd1-4139-8c09-f2864ed6115a', 'scsi-SQEMU_QEMU_HARDDISK_d5b7cda2-7cd1-4139-8c09-f2864ed6115a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022082 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022105 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022116 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:15:26.022128 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f-osd--block--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f', 'dm-uuid-LVM-fxhH0SkmSCqWU4Wy7dw5tLQClhfedOljDMkZCymSSYMmhuftj12v8Tpyva9L0mc8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part1', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part14', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part15', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part16', 'scsi-SQEMU_QEMU_HARDDISK_9fbd7e64-ce07-4fda-ab82-c32e390fbede-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022168 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7e540012--4fa7--591e--a498--149cbb5b09d9-osd--block--7e540012--4fa7--591e--a498--149cbb5b09d9', 'dm-uuid-LVM-eZCOPRchOkTotQpPgPuFhEXd8dSlq2Gd5ddJWqEgSxlUkQ9NQ1bOeX0PsU7Z3aN0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--47edfe94--e799--500a--9f78--eae255c41273-osd--block--47edfe94--e799--500a--9f78--eae255c41273'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YpnRQB-jBP1-82g6-g8fd-LeRA-d7Tm-eXHHyS', 'scsi-0QEMU_QEMU_HARDDISK_4fee078c-1565-4ab1-bdda-b8bebdd42045', 'scsi-SQEMU_QEMU_HARDDISK_4fee078c-1565-4ab1-bdda-b8bebdd42045'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022196 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022213 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022225 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--efff7302--70e8--5bbc--90af--2166d1a25777-osd--block--efff7302--70e8--5bbc--90af--2166d1a25777'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iiuCsz-VrdB-dgiu-Kx5a-URcy-cBxF-vraN5N', 'scsi-0QEMU_QEMU_HARDDISK_c17a8236-4766-4598-abab-5d58d5ce65a6', 'scsi-SQEMU_QEMU_HARDDISK_c17a8236-4766-4598-abab-5d58d5ce65a6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022243 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022260 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d89e604-ccfa-4ce6-abe5-76180138882d', 'scsi-SQEMU_QEMU_HARDDISK_7d89e604-ccfa-4ce6-abe5-76180138882d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022272 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022289 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022300 | orchestrator | skipping: [testbed-node-5] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022312 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.022323 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022343 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022360 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022380 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part1', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part14', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part15', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part16', 'scsi-SQEMU_QEMU_HARDDISK_05ca2b4f-b6fa-412a-995c-f659adce7ca3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022392 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f-osd--block--8ee830d1--3d8f--5ecc--a4b4--c1bec6b9910f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kj0JoU-vxqx-oB5o-AIwD-oewZ-c72h-zh64Ec', 'scsi-0QEMU_QEMU_HARDDISK_dbba508b-4e10-452f-8431-011284f42e7d', 'scsi-SQEMU_QEMU_HARDDISK_dbba508b-4e10-452f-8431-011284f42e7d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 14:15:26.022415 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7e540012--4fa7--591e--a498--149cbb5b09d9-osd--block--7e540012--4fa7--591e--a498--149cbb5b09d9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-chllP0-9PgN-Y42z-FzFx-ub4p-LQKx-I4sZ4l', 'scsi-0QEMU_QEMU_HARDDISK_f57a5620-543a-43ae-a22d-8a42cad6fb24', 'scsi-SQEMU_QEMU_HARDDISK_f57a5620-543a-43ae-a22d-8a42cad6fb24'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-01 14:15:26.022427 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c347dc72-435c-43d5-a9cf-2c60f1de142e', 'scsi-SQEMU_QEMU_HARDDISK_c347dc72-435c-43d5-a9cf-2c60f1de142e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-01 14:15:26.022446 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-13-18-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-11-01 14:15:26.022458 | orchestrator | skipping: [testbed-node-5]
2025-11-01 14:15:26.022469 | orchestrator |
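
None of the block devices dumped above pass the `osd_auto_discovery | default(False) | bool` condition, so no device list is auto-generated on the OSD nodes in this run. As a rough, hypothetical illustration (not ceph-ansible's actual implementation) of the kind of filtering such an auto-discovery pass applies to these `ansible_facts` device dictionaries:

```python
# Hypothetical sketch of the kind of filtering an osd_auto_discovery-style
# scan applies to the ansible_facts 'devices' dictionaries printed above.
# Illustration only, not ceph-ansible's actual implementation.

def candidate_osd_devices(devices):
    """Return names of devices that look like empty, non-removable data disks."""
    selected = []
    for name, info in devices.items():
        if name.startswith(("loop", "dm-", "sr")):  # loop, device-mapper, DVD
            continue
        if info.get("removable") == "1":            # e.g. the config-drive sr0
            continue
        if info.get("partitions"):                  # the root disk sda is partitioned
            continue
        if info.get("holders"):                     # sdb/sdc already hold Ceph OSD LVs
            continue
        selected.append(name)
    return selected

# Abridged shapes taken from the skipped items above: only the empty 20 GB
# disk sdd would qualify on these nodes.
sample = {
    "sda": {"removable": "0", "partitions": {"sda1": {}}, "holders": []},
    "sdb": {"removable": "0", "partitions": {}, "holders": ["ceph-...-osd-block-..."]},
    "sdd": {"removable": "0", "partitions": {}, "holders": []},
    "sr0": {"removable": "1", "partitions": {}, "holders": []},
}
print(candidate_osd_devices(sample))  # ['sdd']
```
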
2025-11-01 14:15:26.022480 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-11-01 14:15:26.022490 | orchestrator | Saturday 01 November 2025 14:13:24 +0000 (0:00:00.721) 0:00:19.319 *****
2025-11-01 14:15:26.022501 | orchestrator | ok: [testbed-node-3]
2025-11-01 14:15:26.022512 | orchestrator | ok: [testbed-node-4]
2025-11-01 14:15:26.022573 | orchestrator | ok: [testbed-node-5]
2025-11-01 14:15:26.022586 | orchestrator |
2025-11-01 14:15:26.022597 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-11-01 14:15:26.022608 | orchestrator | Saturday 01 November 2025 14:13:25 +0000 (0:00:00.550) 0:00:20.041 *****
2025-11-01 14:15:26.022626 | orchestrator | ok: [testbed-node-3]
2025-11-01 14:15:26.022637 | orchestrator | ok: [testbed-node-4]
2025-11-01 14:15:26.022648 | orchestrator | ok: [testbed-node-5]
2025-11-01 14:15:26.022658 | orchestrator |
2025-11-01 14:15:26.022669 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-11-01 14:15:26.022680 | orchestrator | Saturday 01 November 2025 14:13:25 +0000 (0:00:00.689) 0:00:20.591 *****
2025-11-01 14:15:26.022690 | orchestrator | ok: [testbed-node-3]
2025-11-01 14:15:26.022701 | orchestrator | ok: [testbed-node-4]
2025-11-01 14:15:26.022712 | orchestrator | ok: [testbed-node-5]
2025-11-01 14:15:26.022722 | orchestrator |
2025-11-01 14:15:26.022733 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-11-01 14:15:26.022744 | orchestrator | Saturday 01 November 2025 14:13:26 +0000 (0:00:00.327) 0:00:21.281 *****
2025-11-01 14:15:26.022754 | orchestrator | skipping: [testbed-node-3]
2025-11-01 14:15:26.022765 | orchestrator | skipping: [testbed-node-4]
2025-11-01 14:15:26.022776 | orchestrator | skipping: [testbed-node-5]
2025-11-01 14:15:26.022787 | orchestrator |
2025-11-01 14:15:26.022797 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-11-01 14:15:26.022808 | orchestrator | Saturday 01 November 2025 14:13:26 +0000 (0:00:00.455) 0:00:21.608 *****
2025-11-01 14:15:26.022818 | orchestrator | skipping: [testbed-node-3]
2025-11-01 14:15:26.022829 | orchestrator | skipping: [testbed-node-4]
2025-11-01 14:15:26.022840 | orchestrator | skipping: [testbed-node-5]
2025-11-01 14:15:26.022850 | orchestrator |
2025-11-01 14:15:26.022861 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-11-01 14:15:26.022872 | orchestrator | Saturday 01 November 2025 14:13:27 +0000 (0:00:00.607) 0:00:22.063 *****
2025-11-01 14:15:26.022882 | orchestrator | skipping: [testbed-node-3]
2025-11-01 14:15:26.022893 | orchestrator | skipping: [testbed-node-4]
2025-11-01 14:15:26.022903 | orchestrator | skipping: [testbed-node-5]
2025-11-01 14:15:26.022914 | orchestrator |
2025-11-01 14:15:26.022925 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-11-01 14:15:26.022935 | orchestrator | Saturday 01 November 2025 14:13:28 +0000 (0:00:00.607) 0:00:22.671 *****
2025-11-01 14:15:26.022944 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-11-01 14:15:26.022954 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-11-01 14:15:26.022963 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-11-01 14:15:26.022973 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-11-01 14:15:26.022982 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-11-01 14:15:26.022992 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-11-01 14:15:26.023001 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-11-01 14:15:26.023011 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-11-01 14:15:26.023020 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-11-01 14:15:26.023029 | orchestrator |
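
The `_monitor_addresses - ipv4` fact above collects one entry per monitor (testbed-node-0 through testbed-node-2) on each OSD host; the ipv6 variant that follows is skipped. A minimal sketch of the resulting structure, assuming those three nodes are the monitors and using the 192.168.16.10-12 management addresses that the delegation lines later in this log show for them (this mirrors the idea only, not ceph-ansible's exact code):

```python
# Minimal sketch (not ceph-ansible code) of the _monitor_addresses structure
# assembled above, assuming testbed-node-0..2 are the monitors and using the
# 192.168.16.x management addresses shown by the delegation lines further on.

monitors = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}

# One name/address pair per monitor, iterable when rendering e.g. a mon_host line.
_monitor_addresses = [{"name": name, "addr": addr} for name, addr in monitors.items()]

print(",".join(m["addr"] for m in _monitor_addresses))
# -> 192.168.16.10,192.168.16.11,192.168.16.12
```
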
2025-11-01 14:15:26.023039 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-11-01 14:15:26.023049 | orchestrator | Saturday 01 November 2025 14:13:28 +0000 (0:00:00.913) 0:00:23.584 *****
2025-11-01 14:15:26.023058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-11-01 14:15:26.023068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-11-01 14:15:26.023077 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-11-01 14:15:26.023087 | orchestrator | skipping: [testbed-node-3]
2025-11-01 14:15:26.023096 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-11-01 14:15:26.023106 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-11-01 14:15:26.023115 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-11-01 14:15:26.023124 | orchestrator | skipping: [testbed-node-4]
2025-11-01 14:15:26.023134 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-11-01 14:15:26.023143 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-11-01 14:15:26.023159 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-11-01 14:15:26.023168 | orchestrator | skipping: [testbed-node-5]
2025-11-01 14:15:26.023178 | orchestrator |
2025-11-01 14:15:26.023187 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-11-01 14:15:26.023197 | orchestrator | Saturday 01 November 2025 14:13:29 +0000 (0:00:00.462) 0:00:24.047 *****
2025-11-01 14:15:26.023206 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-01 14:15:26.023216 | orchestrator |
2025-11-01 14:15:26.023226 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-11-01 14:15:26.023235 | orchestrator | Saturday 01 November 2025 14:13:30 +0000 (0:00:00.780) 0:00:24.827 *****
2025-11-01 14:15:26.023245 | orchestrator | skipping: [testbed-node-3]
2025-11-01 14:15:26.023254 | orchestrator | skipping: [testbed-node-4]
2025-11-01 14:15:26.023264 | orchestrator | skipping: [testbed-node-5]
2025-11-01 14:15:26.023273 | orchestrator |
2025-11-01 14:15:26.023288 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-11-01 14:15:26.023298 | orchestrator | Saturday 01 November 2025 14:13:30 +0000 (0:00:00.353) 0:00:25.181 *****
2025-11-01 14:15:26.023308 | orchestrator | skipping: [testbed-node-3]
2025-11-01 14:15:26.023317 | orchestrator | skipping: [testbed-node-4]
2025-11-01 14:15:26.023326 | orchestrator | skipping: [testbed-node-5]
2025-11-01 14:15:26.023336 | orchestrator |
2025-11-01 14:15:26.023345 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-11-01 14:15:26.023355 | orchestrator | Saturday 01 November 2025 14:13:30 +0000 (0:00:00.342) 0:00:25.524 *****
2025-11-01 14:15:26.023364 | orchestrator | skipping: [testbed-node-3]
2025-11-01 14:15:26.023374 | orchestrator | skipping: [testbed-node-4]
2025-11-01 14:15:26.023383 | orchestrator | skipping: [testbed-node-5]
2025-11-01 14:15:26.023393 | orchestrator |
2025-11-01 14:15:26.023402 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-11-01 14:15:26.023412 | orchestrator | Saturday 01 November 2025 14:13:31 +0000 (0:00:00.342) 0:00:25.866 *****
2025-11-01 14:15:26.023421 | orchestrator | ok: [testbed-node-3]
2025-11-01 14:15:26.023431 | orchestrator | ok: [testbed-node-4]
2025-11-01 14:15:26.023440 | orchestrator | ok: [testbed-node-5]
2025-11-01 14:15:26.023449 | orchestrator |
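
In the block above, the `radosgw_address_block` variants are skipped and `_radosgw_address` is taken from an explicitly configured `radosgw_address`; the interface-based fallbacks below are skipped as well. A hypothetical helper (not ceph-ansible's code) showing that precedence, with placeholder documentation addresses rather than the testbed's real values:

```python
# Hypothetical helper mirroring the precedence visible in the Set_fact tasks
# around this point: an address block first, then an explicit radosgw_address,
# then an address derived from radosgw_interface. Illustration only.
import ipaddress

def resolve_radosgw_address(address_block, address, interface, interface_addrs):
    if address_block:
        # pick the address this host owns inside the configured CIDR block
        net = ipaddress.ip_network(address_block)
        for addr in interface_addrs.values():
            if ipaddress.ip_address(addr) in net:
                return addr
    if address:
        return address
    if interface:
        return interface_addrs[interface]
    raise ValueError("no usable radosgw address configuration")

# In this run only radosgw_address is set, so the explicit value wins:
print(resolve_radosgw_address(None, "192.0.2.10", None, {"eth0": "192.0.2.10"}))
```
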
14:15:26.023497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:15:26.023506 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.023516 | orchestrator | 2025-11-01 14:15:26.023540 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-11-01 14:15:26.023550 | orchestrator | Saturday 01 November 2025 14:13:32 +0000 (0:00:00.423) 0:00:27.301 ***** 2025-11-01 14:15:26.023560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 14:15:26.023569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:15:26.023579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:15:26.023588 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.023597 | orchestrator | 2025-11-01 14:15:26.023607 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-11-01 14:15:26.023616 | orchestrator | Saturday 01 November 2025 14:13:33 +0000 (0:00:00.430) 0:00:27.731 ***** 2025-11-01 14:15:26.023626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 14:15:26.023635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 14:15:26.023645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 14:15:26.023663 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:15:26.023673 | orchestrator | 2025-11-01 14:15:26.023682 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-11-01 14:15:26.023692 | orchestrator | Saturday 01 November 2025 14:13:33 +0000 (0:00:00.419) 0:00:28.151 ***** 2025-11-01 14:15:26.023701 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:15:26.023711 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:15:26.023720 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:15:26.023730 | orchestrator | 2025-11-01 14:15:26.023739 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-11-01 14:15:26.023749 | orchestrator | Saturday 01 November 2025 14:13:33 +0000 (0:00:00.353) 0:00:28.504 ***** 2025-11-01 14:15:26.023782 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-01 14:15:26.023793 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-11-01 14:15:26.023802 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-11-01 14:15:26.023812 | orchestrator | 2025-11-01 14:15:26.023846 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-11-01 14:15:26.023863 | orchestrator | Saturday 01 November 2025 14:13:34 +0000 (0:00:00.658) 0:00:29.163 ***** 2025-11-01 14:15:26.023873 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-01 14:15:26.023882 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 14:15:26.023892 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 14:15:26.023901 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-01 14:15:26.023911 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-01 14:15:26.023920 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-11-01 14:15:26.023930 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager)
2025-11-01 14:15:26.023939 | orchestrator |
2025-11-01 14:15:26.023949 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-11-01 14:15:26.023958 | orchestrator | Saturday 01 November 2025 14:13:35 +0000 (0:00:01.074) 0:00:30.237 *****
2025-11-01 14:15:26.023968 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-11-01 14:15:26.023977 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-11-01 14:15:26.023987 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-11-01 14:15:26.023996 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-11-01 14:15:26.024006 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-11-01 14:15:26.024015 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-11-01 14:15:26.024025 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-11-01 14:15:26.024034 | orchestrator |
2025-11-01 14:15:26.024049 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-11-01 14:15:26.024059 | orchestrator | Saturday 01 November 2025 14:13:37 +0000 (0:00:02.204) 0:00:32.442 *****
2025-11-01 14:15:26.024069 | orchestrator | skipping: [testbed-node-3]
2025-11-01 14:15:26.024078 | orchestrator | skipping: [testbed-node-4]
2025-11-01 14:15:26.024088 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-11-01 14:15:26.024097 | orchestrator |
2025-11-01 14:15:26.024107 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-11-01 14:15:26.024116 | orchestrator | Saturday 01 November 2025 14:13:38 +0000 (0:00:00.424) 0:00:32.866 *****
2025-11-01 14:15:26.024127 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-11-01 14:15:26.024145 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-11-01 14:15:26.024156 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-11-01 14:15:26.024166 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-11-01 14:15:26.024176 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
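Editor's note: the five pool items echoed above all use identical settings, which corresponds to a pool definition of roughly the following shape. This is a minimal sketch reconstructed from the logged item dictionaries only; the variable name openstack_pools and where it lives in the environment configuration are assumptions, while the field values are taken from the log:

  # Sketch of one entry of an openstack_pools-style variable (name and location assumed);
  # the same settings are repeated for volumes, images, metrics and vms.
  openstack_pools:
    - name: backups
      application: rbd
      type: 1                      # 1 = replicated pool
      size: 3
      min_size: 0
      pg_num: 32
      pgp_num: 32
      pg_autoscale_mode: false
      rule_name: replicated_rule
      erasure_profile: ""
      expected_num_objects: ""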
2025-11-01 14:15:26.024185 | orchestrator |
2025-11-01 14:15:26.024195 | orchestrator | TASK [generate keys] ***********************************************************
2025-11-01 14:15:26.024205 | orchestrator | Saturday 01 November 2025 14:14:25 +0000 (0:00:47.163) 0:01:20.029 *****
2025-11-01 14:15:26.024214 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024224 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024233 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024242 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024252 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024261 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024275 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-11-01 14:15:26.024285 | orchestrator |
2025-11-01 14:15:26.024295 | orchestrator | TASK [get keys from monitors] **************************************************
2025-11-01 14:15:26.024304 | orchestrator | Saturday 01 November 2025 14:14:51 +0000 (0:00:26.103) 0:01:46.133 *****
2025-11-01 14:15:26.024314 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024323 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024333 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024342 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024352 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024361 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024371 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
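Editor's note: the two tasks above create the OpenStack client keyrings on the first monitor and then read them back so they can be distributed. A minimal, illustrative sketch of that pattern follows; it is not the actual task from this playbook. The ceph auth get-or-create / ceph auth get subcommands are standard Ceph CLI and the groups[mon_group_name][0] delegation target matches the log, but the client name and capability strings here are assumptions:

  # Illustrative only: create a client key on the first monitor, then read it back.
  - name: Generate an OpenStack client key on the first monitor
    ansible.builtin.command: >
      ceph auth get-or-create client.cinder
      mon 'profile rbd'
      osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'
    delegate_to: "{{ groups[mon_group_name][0] }}"
    run_once: true

  - name: Read the generated key back from the monitor
    ansible.builtin.command: ceph auth get client.cinder
    delegate_to: "{{ groups[mon_group_name][0] }}"
    run_once: true
    register: cinder_keyring
    changed_when: false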
2025-11-01 14:15:26.024380 | orchestrator |
2025-11-01 14:15:26.024390 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-11-01 14:15:26.024399 | orchestrator | Saturday 01 November 2025 14:15:04 +0000 (0:00:12.897) 0:01:59.030 *****
2025-11-01 14:15:26.024409 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024418 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-11-01 14:15:26.024428 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-11-01 14:15:26.024437 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024452 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-11-01 14:15:26.024462 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-11-01 14:15:26.024477 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024487 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-11-01 14:15:26.024497 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-11-01 14:15:26.024506 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024516 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-11-01 14:15:26.024544 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-11-01 14:15:26.024554 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024563 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-11-01 14:15:26.024573 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-11-01 14:15:26.024582 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 14:15:26.024592 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-11-01 14:15:26.024601 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-11-01 14:15:26.024611 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-11-01 14:15:26.024620 | orchestrator |
2025-11-01 14:15:26.024629 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 14:15:26.024639 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-11-01 14:15:26.024650 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-11-01 14:15:26.024660 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-11-01 14:15:26.024669 | orchestrator |
2025-11-01 14:15:26.024679 | orchestrator |
2025-11-01 14:15:26.024688 | orchestrator |
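Editor's note: the "copy ceph key(s) if needed" task in the play above fans each generated keyring out to every monitor, which is why testbed-node-5 delegates one change per key to testbed-node-0, -1 and -2 and the summary line shows the templated target {{ item.1 }}. A minimal sketch of that delegation pattern, under assumed variable names (openstack_keys, cluster) and not the real task from the role, looks like this:

  # Illustrative only: copy every key to every monitor via a nested loop.
  - name: Copy ceph key(s) to all monitors
    ansible.builtin.copy:
      dest: "/etc/ceph/{{ cluster }}.{{ item.0.name }}.keyring"   # destination path is an assumption
      content: "{{ item.0.keyring }}"
      owner: ceph
      group: ceph
      mode: "0600"
    with_nested:
      - "{{ openstack_keys }}"                 # hypothetical list of the generated keys
      - "{{ groups[mon_group_name] }}"
    delegate_to: "{{ item.1 }}"                # matches the 'testbed-node-5 -> {{ item.1 }}' lines above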
2025-11-01 14:15:26.024810 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 14:15:26.024822 | orchestrator | Saturday 01 November 2025 14:15:22 +0000 (0:00:18.580) 0:02:17.611 *****
2025-11-01 14:15:26.024832 | orchestrator | ===============================================================================
2025-11-01 14:15:26.024842 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.16s
2025-11-01 14:15:26.024851 | orchestrator | generate keys ---------------------------------------------------------- 26.10s
2025-11-01 14:15:26.024861 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.58s
2025-11-01 14:15:26.024871 | orchestrator | get keys from monitors ------------------------------------------------- 12.90s
2025-11-01 14:15:26.024880 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.32s
2025-11-01 14:15:26.024889 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.20s
2025-11-01 14:15:26.024899 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.91s
2025-11-01 14:15:26.024909 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.07s
2025-11-01 14:15:26.024918 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 1.01s
2025-11-01 14:15:26.024928 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.91s
2025-11-01 14:15:26.024943 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.90s
2025-11-01 14:15:26.024954 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.90s
2025-11-01 14:15:26.024971 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.85s
2025-11-01 14:15:26.024980 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.78s
2025-11-01 14:15:26.024990 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.74s
2025-11-01 14:15:26.024999 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s
2025-11-01 14:15:26.025009 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.69s
2025-11-01 14:15:26.025018 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.67s
2025-11-01 14:15:26.025028 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.67s
2025-11-01 14:15:26.025037 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s
2025-11-01 14:15:26.025047 | orchestrator | 2025-11-01 14:15:26 | INFO  | Task 027d4d49-2a6a-4f77-a12d-51cd0ffe7137 is in state STARTED
2025-11-01 14:15:26.025057 | orchestrator | 2025-11-01 14:15:26 | INFO  | Wait 1 second(s) until the next check
2025-11-01 14:15:29.068264 | orchestrator | 2025-11-01 14:15:29 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED
2025-11-01 14:15:29.069135 | orchestrator | 2025-11-01 14:15:29 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED
2025-11-01 14:15:29.069901 | orchestrator | 2025-11-01 14:15:29 | INFO  | Task 027d4d49-2a6a-4f77-a12d-51cd0ffe7137 is in state STARTED
2025-11-01 14:15:29.069997 | orchestrator | 2025-11-01 14:15:29 | INFO  | Wait 1 second(s) until the next check
2025-11-01 14:15:32.119724 | orchestrator | 2025-11-01 14:15:32 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED
2025-11-01 14:15:32.121916 | orchestrator | 2025-11-01 14:15:32 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED
2025-11-01 14:15:32.124040 | orchestrator | 2025-11-01 14:15:32 | INFO  | Task 027d4d49-2a6a-4f77-a12d-51cd0ffe7137 is in state STARTED
2025-11-01 14:15:32.124065 | orchestrator | 2025-11-01 14:15:32 | INFO  | Wait 1 second(s) until the next check
2025-11-01 14:15:35.166174 | orchestrator | 2025-11-01 14:15:35 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED
2025-11-01 14:15:35.168338 | orchestrator | 2025-11-01 14:15:35 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED
2025-11-01 14:15:35.170517 | orchestrator | 2025-11-01 14:15:35 | INFO  | Task 027d4d49-2a6a-4f77-a12d-51cd0ffe7137 is in state STARTED
2025-11-01 14:15:35.170584 | orchestrator | 2025-11-01 14:15:35 | INFO  | Wait 1 second(s) until the next check
2025-11-01 14:15:38.219137 | orchestrator | 2025-11-01 14:15:38 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED
2025-11-01 14:15:38.221802 | orchestrator | 2025-11-01 14:15:38 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED
2025-11-01 14:15:38.225630 | orchestrator | 2025-11-01 14:15:38 | INFO  | Task 027d4d49-2a6a-4f77-a12d-51cd0ffe7137 is in state STARTED
2025-11-01 14:15:38.225710 | orchestrator | 2025-11-01 14:15:38 | INFO  | Wait 1 second(s) until the next check
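Editor's note: while the remaining deployment tasks run, the wrapper on the manager polls the state of each task roughly once per second until it leaves STARTED, as the repeated INFO lines above and below show. The same wait pattern can be sketched as an Ansible retry loop; the check_task_state.sh helper here is purely hypothetical and stands in for whatever the wrapper actually calls to look up a task:

  # Illustrative wait loop only; check_task_state.sh is a hypothetical helper
  # that prints the current state (STARTED, SUCCESS, FAILURE) of a task ID.
  - name: Wait until the deployment task has finished
    ansible.builtin.command: ./check_task_state.sh f9288e9e-09af-41a3-8640-866ab8f19e93
    register: task_state
    until: task_state.stdout in ['SUCCESS', 'FAILURE']
    retries: 300
    delay: 1                                   # matches the "Wait 1 second(s)" interval in the log
    changed_when: false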
2025-11-01 14:15:41.285727 | orchestrator | 2025-11-01 14:15:41 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED
2025-11-01 14:15:41.288005 | orchestrator | 2025-11-01 14:15:41 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED
2025-11-01 14:15:41.289880 | orchestrator | 2025-11-01 14:15:41 | INFO  | Task 027d4d49-2a6a-4f77-a12d-51cd0ffe7137 is in state STARTED
2025-11-01 14:15:41.289904 | orchestrator | 2025-11-01 14:15:41 | INFO  | Wait 1 second(s) until the next check
2025-11-01 14:15:44.348319 | orchestrator | 2025-11-01 14:15:44 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED
2025-11-01 14:15:44.350239 | orchestrator | 2025-11-01 14:15:44 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED
2025-11-01 14:15:44.353054 | orchestrator | 2025-11-01 14:15:44 | INFO  | Task 027d4d49-2a6a-4f77-a12d-51cd0ffe7137 is in state STARTED
2025-11-01 14:15:44.353324 | orchestrator | 2025-11-01 14:15:44 | INFO  | Wait 1 second(s) until the next check
2025-11-01 14:15:47.408837 | orchestrator | 2025-11-01 14:15:47 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED
2025-11-01 14:15:47.410794 | orchestrator | 2025-11-01 14:15:47 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED
2025-11-01 14:15:47.413078 | orchestrator | 2025-11-01 14:15:47 | INFO  | Task 027d4d49-2a6a-4f77-a12d-51cd0ffe7137 is in state STARTED
2025-11-01 14:15:47.413301 | orchestrator | 2025-11-01 14:15:47 | INFO  | Wait 1 second(s) until the next check
2025-11-01 14:15:50.468734 | orchestrator | 2025-11-01 14:15:50 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED
2025-11-01 14:15:50.472459 | orchestrator | 2025-11-01 14:15:50 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED
2025-11-01 14:15:50.476478 | orchestrator | 2025-11-01 14:15:50 | INFO  | Task 027d4d49-2a6a-4f77-a12d-51cd0ffe7137 is in state STARTED
2025-11-01 14:15:50.476785 | orchestrator | 2025-11-01 14:15:50 | INFO  | Wait 1 second(s) until the next check
2025-11-01 14:15:53.517652 | orchestrator | 2025-11-01 14:15:53 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED
2025-11-01 14:15:53.518799 | orchestrator | 2025-11-01 14:15:53 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED
2025-11-01 14:15:53.521207 | orchestrator | 2025-11-01 14:15:53 | INFO  | Task 027d4d49-2a6a-4f77-a12d-51cd0ffe7137 is in state STARTED
2025-11-01 14:15:53.521303 | orchestrator | 2025-11-01 14:15:53 | INFO  | Wait 1 second(s) until the next check
2025-11-01 14:15:56.564322 | orchestrator | 2025-11-01 14:15:56 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state STARTED
2025-11-01 14:15:56.566717 | orchestrator | 2025-11-01 14:15:56 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED
2025-11-01 14:15:56.569725 | orchestrator | 2025-11-01 14:15:56 | INFO  | Task 027d4d49-2a6a-4f77-a12d-51cd0ffe7137 is in state STARTED
2025-11-01 14:15:56.569970 | orchestrator | 2025-11-01 14:15:56 | INFO  | Wait 1 second(s) until the next check
2025-11-01 14:15:59.632401 | orchestrator | 2025-11-01 14:15:59 | INFO  | Task f9288e9e-09af-41a3-8640-866ab8f19e93 is in state SUCCESS
2025-11-01 14:15:59.633645 | orchestrator |
2025-11-01 14:15:59.633686 | orchestrator |
2025-11-01 14:15:59.633699 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 14:15:59.633710 | orchestrator |
2025-11-01 14:15:59.633722 | orchestrator | TASK
[Group hosts based on Kolla action] *************************************** 2025-11-01 14:15:59.633733 | orchestrator | Saturday 01 November 2025 14:14:05 +0000 (0:00:00.279) 0:00:00.279 ***** 2025-11-01 14:15:59.633744 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:15:59.633756 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:15:59.633767 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:15:59.633778 | orchestrator | 2025-11-01 14:15:59.633789 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:15:59.634174 | orchestrator | Saturday 01 November 2025 14:14:05 +0000 (0:00:00.315) 0:00:00.594 ***** 2025-11-01 14:15:59.634192 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-11-01 14:15:59.634204 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-11-01 14:15:59.634215 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-11-01 14:15:59.634250 | orchestrator | 2025-11-01 14:15:59.634262 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-11-01 14:15:59.634273 | orchestrator | 2025-11-01 14:15:59.634284 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-01 14:15:59.634294 | orchestrator | Saturday 01 November 2025 14:14:05 +0000 (0:00:00.468) 0:00:01.063 ***** 2025-11-01 14:15:59.634306 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:15:59.634318 | orchestrator | 2025-11-01 14:15:59.634329 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-11-01 14:15:59.634340 | orchestrator | Saturday 01 November 2025 14:14:06 +0000 (0:00:00.525) 0:00:01.588 ***** 2025-11-01 14:15:59.634370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 14:15:59.634404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 14:15:59.634435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 14:15:59.634448 | orchestrator | 2025-11-01 14:15:59.634460 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-11-01 14:15:59.634471 | orchestrator | Saturday 01 November 2025 14:14:07 +0000 (0:00:01.173) 0:00:02.762 ***** 2025-11-01 14:15:59.634482 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:15:59.634493 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:15:59.634504 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:15:59.634514 | orchestrator | 2025-11-01 14:15:59.634525 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-01 14:15:59.634564 | orchestrator | Saturday 01 November 2025 14:14:08 +0000 (0:00:00.524) 0:00:03.286 ***** 2025-11-01 14:15:59.634575 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-11-01 14:15:59.634595 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-11-01 14:15:59.634613 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-11-01 14:15:59.634624 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-11-01 14:15:59.634635 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-11-01 14:15:59.634646 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-11-01 14:15:59.634657 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-11-01 14:15:59.634667 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-11-01 14:15:59.634678 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-11-01 14:15:59.634688 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-11-01 14:15:59.634702 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-11-01 14:15:59.634719 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-11-01 14:15:59.634737 | orchestrator | 
skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-11-01 14:15:59.634755 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-11-01 14:15:59.634772 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-11-01 14:15:59.634790 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-11-01 14:15:59.634808 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-11-01 14:15:59.634826 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-11-01 14:15:59.634843 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-11-01 14:15:59.634860 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-11-01 14:15:59.634879 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-11-01 14:15:59.634897 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-11-01 14:15:59.634916 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-11-01 14:15:59.634936 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-11-01 14:15:59.634956 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-11-01 14:15:59.634978 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-11-01 14:15:59.634998 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-11-01 14:15:59.635025 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-11-01 14:15:59.635044 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-11-01 14:15:59.635063 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-11-01 14:15:59.635082 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-11-01 14:15:59.635101 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-11-01 14:15:59.635134 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-11-01 14:15:59.635148 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-11-01 14:15:59.635158 | orchestrator | 2025-11-01 14:15:59.635169 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-01 
14:15:59.635180 | orchestrator | Saturday 01 November 2025 14:14:08 +0000 (0:00:00.790) 0:00:04.076 ***** 2025-11-01 14:15:59.635191 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:15:59.635202 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:15:59.635212 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:15:59.635223 | orchestrator | 2025-11-01 14:15:59.635233 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-01 14:15:59.635244 | orchestrator | Saturday 01 November 2025 14:14:09 +0000 (0:00:00.301) 0:00:04.378 ***** 2025-11-01 14:15:59.635254 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.635265 | orchestrator | 2025-11-01 14:15:59.635284 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-01 14:15:59.635296 | orchestrator | Saturday 01 November 2025 14:14:09 +0000 (0:00:00.138) 0:00:04.516 ***** 2025-11-01 14:15:59.635306 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.635317 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.635327 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.635337 | orchestrator | 2025-11-01 14:15:59.635348 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-01 14:15:59.635358 | orchestrator | Saturday 01 November 2025 14:14:09 +0000 (0:00:00.502) 0:00:05.019 ***** 2025-11-01 14:15:59.635369 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:15:59.635379 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:15:59.635390 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:15:59.635400 | orchestrator | 2025-11-01 14:15:59.635411 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-01 14:15:59.635422 | orchestrator | Saturday 01 November 2025 14:14:10 +0000 (0:00:00.316) 0:00:05.335 ***** 2025-11-01 14:15:59.635432 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.635443 | orchestrator | 2025-11-01 14:15:59.635453 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-01 14:15:59.635464 | orchestrator | Saturday 01 November 2025 14:14:10 +0000 (0:00:00.177) 0:00:05.513 ***** 2025-11-01 14:15:59.635475 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.635485 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.635496 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.635506 | orchestrator | 2025-11-01 14:15:59.635516 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-01 14:15:59.635560 | orchestrator | Saturday 01 November 2025 14:14:10 +0000 (0:00:00.337) 0:00:05.851 ***** 2025-11-01 14:15:59.635571 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:15:59.635582 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:15:59.635593 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:15:59.635603 | orchestrator | 2025-11-01 14:15:59.635614 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-01 14:15:59.635624 | orchestrator | Saturday 01 November 2025 14:14:10 +0000 (0:00:00.311) 0:00:06.163 ***** 2025-11-01 14:15:59.635635 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.635646 | orchestrator | 2025-11-01 14:15:59.635656 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-01 14:15:59.635667 | orchestrator | Saturday 
01 November 2025 14:14:11 +0000 (0:00:00.342) 0:00:06.505 ***** 2025-11-01 14:15:59.635677 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.635688 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.635699 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.635709 | orchestrator | 2025-11-01 14:15:59.635720 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-01 14:15:59.635739 | orchestrator | Saturday 01 November 2025 14:14:11 +0000 (0:00:00.354) 0:00:06.860 ***** 2025-11-01 14:15:59.635906 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:15:59.635919 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:15:59.635930 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:15:59.635941 | orchestrator | 2025-11-01 14:15:59.635952 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-01 14:15:59.635963 | orchestrator | Saturday 01 November 2025 14:14:12 +0000 (0:00:00.358) 0:00:07.219 ***** 2025-11-01 14:15:59.635974 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.635984 | orchestrator | 2025-11-01 14:15:59.635995 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-01 14:15:59.636006 | orchestrator | Saturday 01 November 2025 14:14:12 +0000 (0:00:00.147) 0:00:07.366 ***** 2025-11-01 14:15:59.636017 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.636027 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.636038 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.636049 | orchestrator | 2025-11-01 14:15:59.636066 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-01 14:15:59.636077 | orchestrator | Saturday 01 November 2025 14:14:12 +0000 (0:00:00.310) 0:00:07.677 ***** 2025-11-01 14:15:59.636088 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:15:59.636099 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:15:59.636110 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:15:59.636121 | orchestrator | 2025-11-01 14:15:59.636131 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-01 14:15:59.636142 | orchestrator | Saturday 01 November 2025 14:14:13 +0000 (0:00:00.577) 0:00:08.255 ***** 2025-11-01 14:15:59.636153 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.636164 | orchestrator | 2025-11-01 14:15:59.636174 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-01 14:15:59.636185 | orchestrator | Saturday 01 November 2025 14:14:13 +0000 (0:00:00.126) 0:00:08.381 ***** 2025-11-01 14:15:59.636196 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.636207 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.636217 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.636228 | orchestrator | 2025-11-01 14:15:59.636239 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-01 14:15:59.636249 | orchestrator | Saturday 01 November 2025 14:14:13 +0000 (0:00:00.331) 0:00:08.713 ***** 2025-11-01 14:15:59.636260 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:15:59.636271 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:15:59.636282 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:15:59.636292 | orchestrator | 2025-11-01 14:15:59.636303 | orchestrator | TASK [horizon : Check if policies 
shall be overwritten] ************************ 2025-11-01 14:15:59.636314 | orchestrator | Saturday 01 November 2025 14:14:13 +0000 (0:00:00.365) 0:00:09.078 ***** 2025-11-01 14:15:59.636324 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.636335 | orchestrator | 2025-11-01 14:15:59.636346 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-01 14:15:59.636357 | orchestrator | Saturday 01 November 2025 14:14:14 +0000 (0:00:00.137) 0:00:09.216 ***** 2025-11-01 14:15:59.636367 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.636378 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.636389 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.636400 | orchestrator | 2025-11-01 14:15:59.636410 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-01 14:15:59.636429 | orchestrator | Saturday 01 November 2025 14:14:14 +0000 (0:00:00.300) 0:00:09.516 ***** 2025-11-01 14:15:59.636440 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:15:59.636451 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:15:59.636462 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:15:59.636472 | orchestrator | 2025-11-01 14:15:59.636483 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-01 14:15:59.636502 | orchestrator | Saturday 01 November 2025 14:14:15 +0000 (0:00:00.658) 0:00:10.174 ***** 2025-11-01 14:15:59.636513 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.636524 | orchestrator | 2025-11-01 14:15:59.636586 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-01 14:15:59.636599 | orchestrator | Saturday 01 November 2025 14:14:15 +0000 (0:00:00.135) 0:00:10.310 ***** 2025-11-01 14:15:59.636611 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.636622 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.636634 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.636646 | orchestrator | 2025-11-01 14:15:59.636658 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-01 14:15:59.636670 | orchestrator | Saturday 01 November 2025 14:14:15 +0000 (0:00:00.417) 0:00:10.727 ***** 2025-11-01 14:15:59.636681 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:15:59.636693 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:15:59.636705 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:15:59.636717 | orchestrator | 2025-11-01 14:15:59.636729 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-01 14:15:59.636740 | orchestrator | Saturday 01 November 2025 14:14:15 +0000 (0:00:00.371) 0:00:11.099 ***** 2025-11-01 14:15:59.636752 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.636763 | orchestrator | 2025-11-01 14:15:59.636775 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-01 14:15:59.636787 | orchestrator | Saturday 01 November 2025 14:14:16 +0000 (0:00:00.145) 0:00:11.244 ***** 2025-11-01 14:15:59.636799 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.636811 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.636822 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.636834 | orchestrator | 2025-11-01 14:15:59.636846 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2025-11-01 14:15:59.636858 | orchestrator | Saturday 01 November 2025 14:14:16 +0000 (0:00:00.307) 0:00:11.551 ***** 2025-11-01 14:15:59.636870 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:15:59.636883 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:15:59.636894 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:15:59.636905 | orchestrator | 2025-11-01 14:15:59.636916 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-01 14:15:59.636927 | orchestrator | Saturday 01 November 2025 14:14:17 +0000 (0:00:00.662) 0:00:12.214 ***** 2025-11-01 14:15:59.636938 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.636948 | orchestrator | 2025-11-01 14:15:59.636959 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-01 14:15:59.636970 | orchestrator | Saturday 01 November 2025 14:14:17 +0000 (0:00:00.136) 0:00:12.350 ***** 2025-11-01 14:15:59.636980 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.636991 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.637002 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.637012 | orchestrator | 2025-11-01 14:15:59.637023 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-01 14:15:59.637033 | orchestrator | Saturday 01 November 2025 14:14:17 +0000 (0:00:00.327) 0:00:12.678 ***** 2025-11-01 14:15:59.637043 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:15:59.637052 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:15:59.637062 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:15:59.637071 | orchestrator | 2025-11-01 14:15:59.637081 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-01 14:15:59.637091 | orchestrator | Saturday 01 November 2025 14:14:17 +0000 (0:00:00.390) 0:00:13.069 ***** 2025-11-01 14:15:59.637112 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.637122 | orchestrator | 2025-11-01 14:15:59.637132 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-01 14:15:59.637141 | orchestrator | Saturday 01 November 2025 14:14:18 +0000 (0:00:00.165) 0:00:13.234 ***** 2025-11-01 14:15:59.637151 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.637167 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.637176 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.637186 | orchestrator | 2025-11-01 14:15:59.637196 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-11-01 14:15:59.637205 | orchestrator | Saturday 01 November 2025 14:14:18 +0000 (0:00:00.558) 0:00:13.792 ***** 2025-11-01 14:15:59.637215 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:15:59.637224 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:15:59.637234 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:15:59.637243 | orchestrator | 2025-11-01 14:15:59.637253 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-11-01 14:15:59.637263 | orchestrator | Saturday 01 November 2025 14:14:20 +0000 (0:00:01.809) 0:00:15.601 ***** 2025-11-01 14:15:59.637272 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-11-01 14:15:59.637282 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-11-01 14:15:59.637291 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-11-01 14:15:59.637301 | orchestrator | 2025-11-01 14:15:59.637311 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-11-01 14:15:59.637320 | orchestrator | Saturday 01 November 2025 14:14:22 +0000 (0:00:02.027) 0:00:17.629 ***** 2025-11-01 14:15:59.637330 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-11-01 14:15:59.637340 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-11-01 14:15:59.637350 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-11-01 14:15:59.637359 | orchestrator | 2025-11-01 14:15:59.637369 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-11-01 14:15:59.637384 | orchestrator | Saturday 01 November 2025 14:14:24 +0000 (0:00:02.527) 0:00:20.157 ***** 2025-11-01 14:15:59.637394 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-11-01 14:15:59.637404 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-11-01 14:15:59.637414 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-11-01 14:15:59.637423 | orchestrator | 2025-11-01 14:15:59.637433 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-11-01 14:15:59.637442 | orchestrator | Saturday 01 November 2025 14:14:27 +0000 (0:00:02.283) 0:00:22.440 ***** 2025-11-01 14:15:59.637452 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.637461 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.637471 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.637480 | orchestrator | 2025-11-01 14:15:59.637490 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-11-01 14:15:59.637499 | orchestrator | Saturday 01 November 2025 14:14:27 +0000 (0:00:00.412) 0:00:22.852 ***** 2025-11-01 14:15:59.637509 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.637518 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.637544 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.637554 | orchestrator | 2025-11-01 14:15:59.637564 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-01 14:15:59.637573 | orchestrator | Saturday 01 November 2025 14:14:28 +0000 (0:00:00.346) 0:00:23.199 ***** 2025-11-01 14:15:59.637583 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:15:59.637592 | orchestrator | 2025-11-01 14:15:59.637602 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-11-01 14:15:59.637611 | orchestrator | Saturday 01 November 2025 14:14:28 +0000 (0:00:00.844) 0:00:24.044 ***** 2025-11-01 14:15:59.637636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 14:15:59.637658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 14:15:59.637682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 14:15:59.637693 | orchestrator | 2025-11-01 14:15:59.637703 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-11-01 14:15:59.637712 | orchestrator | Saturday 01 November 2025 14:14:30 +0000 (0:00:01.691) 0:00:25.735 ***** 2025-11-01 14:15:59.637731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-01 14:15:59.637748 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.637770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-01 14:15:59.637781 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.637792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-01 14:15:59.637809 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.637819 | orchestrator | 2025-11-01 14:15:59.637828 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-11-01 14:15:59.637838 | orchestrator | Saturday 01 November 2025 14:14:31 +0000 (0:00:00.642) 0:00:26.378 ***** 2025-11-01 14:15:59.637860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-01 14:15:59.637871 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.637886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-01 14:15:59.637903 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.637920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-01 14:15:59.637938 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.637948 | orchestrator | 2025-11-01 14:15:59.637958 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-11-01 14:15:59.637967 | orchestrator | Saturday 01 November 2025 14:14:32 +0000 (0:00:00.908) 0:00:27.286 ***** 2025-11-01 14:15:59.637983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 14:15:59.638001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 14:15:59.638065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 14:15:59.638079 | orchestrator | 2025-11-01 14:15:59.638089 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-01 14:15:59.638099 | orchestrator | Saturday 01 November 2025 14:14:33 +0000 (0:00:01.545) 0:00:28.832 ***** 2025-11-01 14:15:59.638108 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:15:59.638118 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:15:59.638128 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:15:59.638137 | orchestrator | 2025-11-01 14:15:59.638147 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-01 14:15:59.638156 | orchestrator | Saturday 01 November 2025 14:14:33 +0000 (0:00:00.313) 0:00:29.146 ***** 2025-11-01 14:15:59.638166 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:15:59.638176 | orchestrator | 2025-11-01 
14:15:59.638185 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-11-01 14:15:59.638201 | orchestrator | Saturday 01 November 2025 14:14:34 +0000 (0:00:00.554) 0:00:29.701 ***** 2025-11-01 14:15:59.638211 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:15:59.638221 | orchestrator | 2025-11-01 14:15:59.638231 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-11-01 14:15:59.638246 | orchestrator | Saturday 01 November 2025 14:14:37 +0000 (0:00:02.843) 0:00:32.544 ***** 2025-11-01 14:15:59.638256 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:15:59.638266 | orchestrator | 2025-11-01 14:15:59.638275 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-11-01 14:15:59.638285 | orchestrator | Saturday 01 November 2025 14:14:40 +0000 (0:00:03.057) 0:00:35.601 ***** 2025-11-01 14:15:59.638295 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:15:59.638304 | orchestrator | 2025-11-01 14:15:59.638314 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-11-01 14:15:59.638323 | orchestrator | Saturday 01 November 2025 14:14:58 +0000 (0:00:17.823) 0:00:53.424 ***** 2025-11-01 14:15:59.638333 | orchestrator | 2025-11-01 14:15:59.638342 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-11-01 14:15:59.638352 | orchestrator | Saturday 01 November 2025 14:14:58 +0000 (0:00:00.071) 0:00:53.495 ***** 2025-11-01 14:15:59.638361 | orchestrator | 2025-11-01 14:15:59.638371 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-11-01 14:15:59.638381 | orchestrator | Saturday 01 November 2025 14:14:58 +0000 (0:00:00.067) 0:00:53.563 ***** 2025-11-01 14:15:59.638390 | orchestrator | 2025-11-01 14:15:59.638399 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-11-01 14:15:59.638409 | orchestrator | Saturday 01 November 2025 14:14:58 +0000 (0:00:00.069) 0:00:53.633 ***** 2025-11-01 14:15:59.638419 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:15:59.638428 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:15:59.638438 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:15:59.638447 | orchestrator | 2025-11-01 14:15:59.638457 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:15:59.638466 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-11-01 14:15:59.638476 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-11-01 14:15:59.638486 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-11-01 14:15:59.638496 | orchestrator | 2025-11-01 14:15:59.638505 | orchestrator | 2025-11-01 14:15:59.638515 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:15:59.638525 | orchestrator | Saturday 01 November 2025 14:15:56 +0000 (0:00:57.771) 0:01:51.404 ***** 2025-11-01 14:15:59.638549 | orchestrator | =============================================================================== 2025-11-01 14:15:59.638559 | orchestrator | horizon : Restart horizon container ------------------------------------ 57.77s 2025-11-01 14:15:59.638569 | 
orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.82s 2025-11-01 14:15:59.638578 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.06s 2025-11-01 14:15:59.638588 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.84s 2025-11-01 14:15:59.638597 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.53s 2025-11-01 14:15:59.638611 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.28s 2025-11-01 14:15:59.638621 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.03s 2025-11-01 14:15:59.638631 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.81s 2025-11-01 14:15:59.638641 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.69s 2025-11-01 14:15:59.638651 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.55s 2025-11-01 14:15:59.638660 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.17s 2025-11-01 14:15:59.638670 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.91s 2025-11-01 14:15:59.638686 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.84s 2025-11-01 14:15:59.638695 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s 2025-11-01 14:15:59.638705 | orchestrator | horizon : Update policy file name --------------------------------------- 0.66s 2025-11-01 14:15:59.638714 | orchestrator | horizon : Update policy file name --------------------------------------- 0.66s 2025-11-01 14:15:59.638724 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.64s 2025-11-01 14:15:59.638733 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s 2025-11-01 14:15:59.638743 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2025-11-01 14:15:59.638752 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2025-11-01 14:15:59.638762 | orchestrator | 2025-11-01 14:15:59 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:15:59.638772 | orchestrator | 2025-11-01 14:15:59 | INFO  | Task 027d4d49-2a6a-4f77-a12d-51cd0ffe7137 is in state STARTED 2025-11-01 14:15:59.638781 | orchestrator | 2025-11-01 14:15:59 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:02.683693 | orchestrator | 2025-11-01 14:16:02 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:02.686722 | orchestrator | 2025-11-01 14:16:02 | INFO  | Task 027d4d49-2a6a-4f77-a12d-51cd0ffe7137 is in state STARTED 2025-11-01 14:16:02.686754 | orchestrator | 2025-11-01 14:16:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:05.744356 | orchestrator | 2025-11-01 14:16:05 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:05.746310 | orchestrator | 2025-11-01 14:16:05 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:05.747972 | orchestrator | 2025-11-01 14:16:05 | INFO  | Task 027d4d49-2a6a-4f77-a12d-51cd0ffe7137 is in state SUCCESS 2025-11-01 14:16:05.748226 | orchestrator | 
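The interleaved `INFO | Task <uuid> is in state STARTED` lines in this part of the log come from the OSISM CLI waiting for the background tasks that run these playbooks; in this run, task 027d4d49… reaches SUCCESS quickly while bebc7161… keeps reporting STARTED for roughly another minute. As a minimal sketch of that wait-until-terminal-state pattern only (the task IDs and the one-second message are taken from the log; the state lookup, function name, and demo values below are illustrative, not the actual OSISM implementation):

```python
import time
from typing import Callable, Iterable


def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    """Poll task states until every task reaches a terminal state.

    `get_state` stands in for whatever the task framework provides
    (e.g. a Celery result lookup); it must return strings such as
    'STARTED' or 'SUCCESS', matching the states printed in the log.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)


if __name__ == "__main__":
    # Faked state source for demonstration; the real states come from the manager.
    states = iter(["STARTED", "STARTED", "SUCCESS"])
    wait_for_tasks(["bebc7161-f24f-4d86-a413-086a56062371"],
                   get_state=lambda _id: next(states))
```

The effective check interval seen in the timestamps is closer to three seconds than one, which is consistent with a fixed sleep plus per-check overhead rather than a different configured interval.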
2025-11-01 14:16:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:08.803340 | orchestrator | 2025-11-01 14:16:08 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:08.805915 | orchestrator | 2025-11-01 14:16:08 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:08.805948 | orchestrator | 2025-11-01 14:16:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:11.862249 | orchestrator | 2025-11-01 14:16:11 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:11.863619 | orchestrator | 2025-11-01 14:16:11 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:11.863649 | orchestrator | 2025-11-01 14:16:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:14.912976 | orchestrator | 2025-11-01 14:16:14 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:14.914257 | orchestrator | 2025-11-01 14:16:14 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:14.914288 | orchestrator | 2025-11-01 14:16:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:17.963172 | orchestrator | 2025-11-01 14:16:17 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:17.963808 | orchestrator | 2025-11-01 14:16:17 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:17.964796 | orchestrator | 2025-11-01 14:16:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:21.002827 | orchestrator | 2025-11-01 14:16:20 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:21.004690 | orchestrator | 2025-11-01 14:16:21 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:21.004718 | orchestrator | 2025-11-01 14:16:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:24.043265 | orchestrator | 2025-11-01 14:16:24 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:24.046135 | orchestrator | 2025-11-01 14:16:24 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:24.046169 | orchestrator | 2025-11-01 14:16:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:27.092180 | orchestrator | 2025-11-01 14:16:27 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:27.094146 | orchestrator | 2025-11-01 14:16:27 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:27.094173 | orchestrator | 2025-11-01 14:16:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:30.136040 | orchestrator | 2025-11-01 14:16:30 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:30.137750 | orchestrator | 2025-11-01 14:16:30 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:30.137783 | orchestrator | 2025-11-01 14:16:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:33.181866 | orchestrator | 2025-11-01 14:16:33 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:33.182429 | orchestrator | 2025-11-01 14:16:33 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:33.183092 | orchestrator | 2025-11-01 14:16:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:36.225229 | 
orchestrator | 2025-11-01 14:16:36 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:36.226304 | orchestrator | 2025-11-01 14:16:36 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:36.226584 | orchestrator | 2025-11-01 14:16:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:39.273443 | orchestrator | 2025-11-01 14:16:39 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:39.274179 | orchestrator | 2025-11-01 14:16:39 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:39.274201 | orchestrator | 2025-11-01 14:16:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:42.323574 | orchestrator | 2025-11-01 14:16:42 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:42.325433 | orchestrator | 2025-11-01 14:16:42 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:42.325461 | orchestrator | 2025-11-01 14:16:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:45.374649 | orchestrator | 2025-11-01 14:16:45 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:45.375808 | orchestrator | 2025-11-01 14:16:45 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:45.376167 | orchestrator | 2025-11-01 14:16:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:48.423311 | orchestrator | 2025-11-01 14:16:48 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:48.425438 | orchestrator | 2025-11-01 14:16:48 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:48.425456 | orchestrator | 2025-11-01 14:16:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:51.469671 | orchestrator | 2025-11-01 14:16:51 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:51.470613 | orchestrator | 2025-11-01 14:16:51 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:51.470640 | orchestrator | 2025-11-01 14:16:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:54.516068 | orchestrator | 2025-11-01 14:16:54 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:54.517601 | orchestrator | 2025-11-01 14:16:54 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:54.517631 | orchestrator | 2025-11-01 14:16:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:16:57.567941 | orchestrator | 2025-11-01 14:16:57 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:16:57.570190 | orchestrator | 2025-11-01 14:16:57 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:16:57.570291 | orchestrator | 2025-11-01 14:16:57 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:00.620697 | orchestrator | 2025-11-01 14:17:00 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state STARTED 2025-11-01 14:17:00.621971 | orchestrator | 2025-11-01 14:17:00 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:17:00.622003 | orchestrator | 2025-11-01 14:17:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:03.666131 | orchestrator | 2025-11-01 14:17:03 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state 
STARTED 2025-11-01 14:17:03.668123 | orchestrator | 2025-11-01 14:17:03 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:17:03.668759 | orchestrator | 2025-11-01 14:17:03 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:06.709156 | orchestrator | 2025-11-01 14:17:06 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:06.710671 | orchestrator | 2025-11-01 14:17:06 | INFO  | Task bebc7161-f24f-4d86-a413-086a56062371 is in state SUCCESS 2025-11-01 14:17:06.712088 | orchestrator | 2025-11-01 14:17:06.712119 | orchestrator | 2025-11-01 14:17:06.712131 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-11-01 14:17:06.712143 | orchestrator | 2025-11-01 14:17:06.712154 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2025-11-01 14:17:06.712166 | orchestrator | Saturday 01 November 2025 14:15:28 +0000 (0:00:00.232) 0:00:00.232 ***** 2025-11-01 14:17:06.712177 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-11-01 14:17:06.712190 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-01 14:17:06.712201 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-01 14:17:06.712212 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-11-01 14:17:06.712223 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-01 14:17:06.712234 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-11-01 14:17:06.712245 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-11-01 14:17:06.712256 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-11-01 14:17:06.712267 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-11-01 14:17:06.712278 | orchestrator | 2025-11-01 14:17:06.712314 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-11-01 14:17:06.712326 | orchestrator | Saturday 01 November 2025 14:15:33 +0000 (0:00:05.111) 0:00:05.344 ***** 2025-11-01 14:17:06.712336 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-11-01 14:17:06.712347 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-01 14:17:06.712358 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-01 14:17:06.712369 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-11-01 14:17:06.712379 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-11-01 14:17:06.712390 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-11-01 14:17:06.712400 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-11-01 14:17:06.712411 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-11-01 14:17:06.712422 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-11-01 14:17:06.712432 | orchestrator | 2025-11-01 14:17:06.712443 | orchestrator | TASK [Create share directory] ************************************************** 2025-11-01 14:17:06.712454 | orchestrator | Saturday 01 November 2025 14:15:37 +0000 (0:00:04.453) 0:00:09.797 ***** 2025-11-01 14:17:06.712466 | orchestrator | changed: [testbed-manager -> localhost] 2025-11-01 14:17:06.712477 | orchestrator | 2025-11-01 14:17:06.712488 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-11-01 14:17:06.712498 | orchestrator | Saturday 01 November 2025 14:15:38 +0000 (0:00:01.135) 0:00:10.933 ***** 2025-11-01 14:17:06.712509 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-11-01 14:17:06.712549 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-11-01 14:17:06.712561 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-11-01 14:17:06.712572 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-11-01 14:17:06.712583 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-11-01 14:17:06.713197 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-11-01 14:17:06.713214 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-11-01 14:17:06.713225 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-11-01 14:17:06.713252 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-11-01 14:17:06.713263 | orchestrator | 2025-11-01 14:17:06.713275 | orchestrator | TASK [Check if target directories exist] *************************************** 2025-11-01 14:17:06.713285 | orchestrator | Saturday 01 November 2025 14:15:52 +0000 (0:00:14.200) 0:00:25.133 ***** 2025-11-01 14:17:06.713296 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2025-11-01 14:17:06.713308 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2025-11-01 14:17:06.713319 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2025-11-01 14:17:06.713330 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2025-11-01 14:17:06.713381 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2025-11-01 14:17:06.713393 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2025-11-01 14:17:06.713416 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2025-11-01 14:17:06.713427 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2025-11-01 14:17:06.713438 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2025-11-01 14:17:06.713449 | orchestrator 
| 2025-11-01 14:17:06.713459 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-11-01 14:17:06.713470 | orchestrator | Saturday 01 November 2025 14:15:56 +0000 (0:00:03.149) 0:00:28.283 ***** 2025-11-01 14:17:06.713482 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-11-01 14:17:06.713493 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-11-01 14:17:06.713503 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-11-01 14:17:06.713514 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-11-01 14:17:06.713552 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-11-01 14:17:06.713563 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-11-01 14:17:06.713574 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-11-01 14:17:06.713584 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-11-01 14:17:06.713595 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-11-01 14:17:06.713606 | orchestrator | 2025-11-01 14:17:06.713617 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:17:06.713628 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:17:06.713640 | orchestrator | 2025-11-01 14:17:06.713651 | orchestrator | 2025-11-01 14:17:06.713662 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:17:06.713673 | orchestrator | Saturday 01 November 2025 14:16:03 +0000 (0:00:07.263) 0:00:35.546 ***** 2025-11-01 14:17:06.713683 | orchestrator | =============================================================================== 2025-11-01 14:17:06.713694 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.20s 2025-11-01 14:17:06.713705 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.26s 2025-11-01 14:17:06.713716 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.11s 2025-11-01 14:17:06.713727 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.45s 2025-11-01 14:17:06.713737 | orchestrator | Check if target directories exist --------------------------------------- 3.15s 2025-11-01 14:17:06.713748 | orchestrator | Create share directory -------------------------------------------------- 1.14s 2025-11-01 14:17:06.713759 | orchestrator | 2025-11-01 14:17:06.713769 | orchestrator | 2025-11-01 14:17:06.713780 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:17:06.713791 | orchestrator | 2025-11-01 14:17:06.713802 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:17:06.713812 | orchestrator | Saturday 01 November 2025 14:14:05 +0000 (0:00:00.278) 0:00:00.278 ***** 2025-11-01 14:17:06.713823 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:17:06.713834 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:17:06.713846 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:17:06.713859 | orchestrator | 2025-11-01 14:17:06.713871 | orchestrator | TASK [Group hosts based on enabled services] 
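The "Copy ceph keys to the configuration repository" play above fetches the client keyrings from the first Ceph node, stages them in a share directory, and then writes each keyring into its matching directory under /opt/configuration. A minimal local sketch of that fan-out step, with a deliberately abbreviated keyring-to-directory mapping inferred from the item lists in the log (the real play drives this through Ansible fetch/copy tasks, and the share path used below is hypothetical):

```python
import shutil
from pathlib import Path

# Abbreviated, inferred mapping; the full play also places keyrings for
# cinder-backup, nova, glance, gnocchi and manila into their overlays.
KEYRING_TARGETS = {
    "ceph.client.admin.keyring": [
        "/opt/configuration/environments/infrastructure/files/ceph",
    ],
    "ceph.client.cinder.keyring": [
        "/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume",
        "/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup",
        "/opt/configuration/environments/kolla/files/overlays/nova",
    ],
}


def distribute_keyrings(share_dir: str) -> None:
    """Copy previously fetched keyrings from the share directory into the repo."""
    for keyring, targets in KEYRING_TARGETS.items():
        source = Path(share_dir) / keyring
        for target in targets:
            target_dir = Path(target)
            target_dir.mkdir(parents=True, exist_ok=True)
            shutil.copy2(source, target_dir / keyring)


# Example (share path is hypothetical):
# distribute_keyrings("/share/testbed")
```

This mirrors why the same `ceph.client.cinder.keyring` item appears several times in the task output: one fetched keyring is written to multiple overlay directories.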
*********************************** 2025-11-01 14:17:06.713885 | orchestrator | Saturday 01 November 2025 14:14:05 +0000 (0:00:00.312) 0:00:00.590 ***** 2025-11-01 14:17:06.713897 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-11-01 14:17:06.713909 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-11-01 14:17:06.713921 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-11-01 14:17:06.713942 | orchestrator | 2025-11-01 14:17:06.713954 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-11-01 14:17:06.713966 | orchestrator | 2025-11-01 14:17:06.713978 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-01 14:17:06.713991 | orchestrator | Saturday 01 November 2025 14:14:05 +0000 (0:00:00.504) 0:00:01.095 ***** 2025-11-01 14:17:06.714004 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:17:06.714089 | orchestrator | 2025-11-01 14:17:06.714113 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-11-01 14:17:06.714126 | orchestrator | Saturday 01 November 2025 14:14:06 +0000 (0:00:00.572) 0:00:01.667 ***** 2025-11-01 14:17:06.714184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.714204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.714220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.714233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 14:17:06.714260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 14:17:06.714303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 14:17:06.714316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.714328 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.714339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.714350 | orchestrator | 2025-11-01 14:17:06.714361 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-11-01 14:17:06.714372 | orchestrator | Saturday 01 November 2025 14:14:08 +0000 (0:00:01.887) 0:00:03.555 ***** 2025-11-01 14:17:06.714383 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-11-01 14:17:06.714394 | orchestrator | 2025-11-01 14:17:06.714412 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-11-01 14:17:06.714423 | orchestrator | Saturday 01 November 2025 14:14:09 +0000 (0:00:00.916) 0:00:04.471 ***** 2025-11-01 14:17:06.714433 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:17:06.714444 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:17:06.714455 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:17:06.714465 | orchestrator | 2025-11-01 14:17:06.714476 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-11-01 14:17:06.714487 | orchestrator | Saturday 01 November 2025 14:14:09 +0000 (0:00:00.507) 0:00:04.978 ***** 2025-11-01 14:17:06.714498 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 14:17:06.714508 | orchestrator | 2025-11-01 14:17:06.714538 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-01 14:17:06.714550 | orchestrator | Saturday 01 November 2025 14:14:10 +0000 (0:00:00.765) 0:00:05.744 ***** 2025-11-01 14:17:06.714561 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:17:06.714572 | orchestrator | 2025-11-01 14:17:06.714582 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-11-01 14:17:06.714593 | orchestrator | Saturday 01 November 2025 14:14:11 +0000 (0:00:00.554) 0:00:06.299 ***** 2025-11-01 14:17:06.714616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.714629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.714642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.714661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 14:17:06.714673 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 14:17:06.714689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 14:17:06.714711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.714723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.714734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.714752 | orchestrator | 2025-11-01 14:17:06.714763 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-11-01 14:17:06.714774 | orchestrator | Saturday 01 November 2025 14:14:14 +0000 (0:00:03.429) 0:00:09.728 ***** 2025-11-01 14:17:06.714786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 14:17:06.714802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:17:06.714820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 14:17:06.714833 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:17:06.714845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 14:17:06.714856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:17:06.714875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 14:17:06.714886 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:17:06.714898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 14:17:06.714920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:17:06.714937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 14:17:06.714949 | orchestrator | skipping: 
[testbed-node-2] 2025-11-01 14:17:06.714960 | orchestrator | 2025-11-01 14:17:06.714971 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-11-01 14:17:06.714982 | orchestrator | Saturday 01 November 2025 14:14:15 +0000 (0:00:01.022) 0:00:10.751 ***** 2025-11-01 14:17:06.714993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 14:17:06.715112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:17:06.715129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 14:17:06.715140 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:17:06.715158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 14:17:06.715178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:17:06.715190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 14:17:06.715209 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:17:06.715221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 14:17:06.715232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:17:06.715244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 14:17:06.715260 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:17:06.715271 | orchestrator | 2025-11-01 14:17:06.715282 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-11-01 14:17:06.715293 | orchestrator | Saturday 01 November 2025 14:14:16 +0000 (0:00:00.823) 0:00:11.575 ***** 2025-11-01 14:17:06.715311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.715324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.715343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.715355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 14:17:06.715372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 14:17:06.715390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 14:17:06.715402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.715420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 
14:17:06.715431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.715443 | orchestrator | 2025-11-01 14:17:06.715454 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-11-01 14:17:06.715465 | orchestrator | Saturday 01 November 2025 14:14:19 +0000 (0:00:03.235) 0:00:14.811 ***** 2025-11-01 14:17:06.715476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.715492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:17:06.715512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.715578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:17:06.715592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.715604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:17:06.715621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.715639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.715657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.715668 | orchestrator | 2025-11-01 14:17:06.715680 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-11-01 14:17:06.715691 | orchestrator | Saturday 01 November 2025 14:14:25 +0000 (0:00:05.975) 0:00:20.786 ***** 2025-11-01 14:17:06.715702 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:17:06.715712 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:17:06.715723 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:17:06.715734 | orchestrator | 2025-11-01 14:17:06.715746 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-11-01 14:17:06.715759 | orchestrator | Saturday 01 November 2025 14:14:27 +0000 (0:00:01.671) 0:00:22.457 ***** 2025-11-01 14:17:06.715771 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:17:06.715783 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:17:06.715795 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:17:06.715807 | orchestrator | 2025-11-01 14:17:06.715819 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-11-01 14:17:06.715832 | orchestrator | Saturday 01 November 2025 14:14:27 +0000 (0:00:00.582) 0:00:23.040 ***** 2025-11-01 14:17:06.715844 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:17:06.715856 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:17:06.715868 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:17:06.715880 | orchestrator | 2025-11-01 14:17:06.715892 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-11-01 14:17:06.715904 | orchestrator | Saturday 01 November 2025 14:14:28 +0000 (0:00:00.341) 0:00:23.382 ***** 2025-11-01 14:17:06.715917 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:17:06.715929 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:17:06.715941 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:17:06.715954 | orchestrator | 2025-11-01 14:17:06.715965 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-11-01 14:17:06.715978 | orchestrator | Saturday 01 November 2025 14:14:28 +0000 (0:00:00.525) 0:00:23.908 ***** 2025-11-01 14:17:06.715991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.716010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:17:06.716039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.716053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:17:06.716068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.716081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 14:17:06.716098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.716120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.716138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.716149 | orchestrator | 2025-11-01 14:17:06.716160 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-01 14:17:06.716171 | orchestrator | Saturday 01 November 2025 14:14:31 +0000 (0:00:02.438) 0:00:26.346 ***** 2025-11-01 14:17:06.716182 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:17:06.716193 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:17:06.716204 | orchestrator | skipping: [testbed-node-2] 
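[annotation] The healthcheck blocks that recur in the item dumps above (interval/retries/start_period/test/timeout) describe container-level health probes; kolla-ansible applies them through its own container module, so the following is only an illustrative Docker-flag equivalent for the keystone container on testbed-node-0, assuming the bare numbers are seconds and that healthcheck_curl is a curl wrapper bundled in the kolla images:

  # illustrative sketch only, not the literal command kolla-ansible runs
  docker run -d --name keystone \
    --health-cmd 'healthcheck_curl http://192.168.16.10:5000' \
    --health-interval 30s \
    --health-timeout 30s \
    --health-retries 3 \
    --health-start-period 5s \
    registry.osism.tech/kolla/keystone:2024.2

The URL, image tag, and timing values are taken directly from the task output above; only the mapping onto docker run flags is assumed.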
2025-11-01 14:17:06.716215 | orchestrator | 2025-11-01 14:17:06.716225 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-11-01 14:17:06.716236 | orchestrator | Saturday 01 November 2025 14:14:31 +0000 (0:00:00.320) 0:00:26.666 ***** 2025-11-01 14:17:06.716247 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-11-01 14:17:06.716257 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-11-01 14:17:06.716268 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-11-01 14:17:06.716279 | orchestrator | 2025-11-01 14:17:06.716290 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-11-01 14:17:06.716301 | orchestrator | Saturday 01 November 2025 14:14:33 +0000 (0:00:01.825) 0:00:28.492 ***** 2025-11-01 14:17:06.716311 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 14:17:06.716322 | orchestrator | 2025-11-01 14:17:06.716333 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-11-01 14:17:06.716343 | orchestrator | Saturday 01 November 2025 14:14:34 +0000 (0:00:00.973) 0:00:29.465 ***** 2025-11-01 14:17:06.716354 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:17:06.716365 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:17:06.716376 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:17:06.716386 | orchestrator | 2025-11-01 14:17:06.716397 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-11-01 14:17:06.716408 | orchestrator | Saturday 01 November 2025 14:14:35 +0000 (0:00:00.881) 0:00:30.346 ***** 2025-11-01 14:17:06.716418 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-01 14:17:06.716429 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 14:17:06.716440 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-01 14:17:06.716451 | orchestrator | 2025-11-01 14:17:06.716461 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-11-01 14:17:06.716472 | orchestrator | Saturday 01 November 2025 14:14:36 +0000 (0:00:01.254) 0:00:31.601 ***** 2025-11-01 14:17:06.716489 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:17:06.716500 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:17:06.716511 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:17:06.716536 | orchestrator | 2025-11-01 14:17:06.716547 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-11-01 14:17:06.716558 | orchestrator | Saturday 01 November 2025 14:14:36 +0000 (0:00:00.338) 0:00:31.939 ***** 2025-11-01 14:17:06.716569 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-11-01 14:17:06.716580 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-11-01 14:17:06.716590 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-11-01 14:17:06.716601 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-11-01 14:17:06.716611 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-11-01 14:17:06.716622 | orchestrator | changed: 
[testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-11-01 14:17:06.716633 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-11-01 14:17:06.716644 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-11-01 14:17:06.716660 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-11-01 14:17:06.716671 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-11-01 14:17:06.716681 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-11-01 14:17:06.716692 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-11-01 14:17:06.716703 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-11-01 14:17:06.716713 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-11-01 14:17:06.716724 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-11-01 14:17:06.716740 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-01 14:17:06.716752 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-01 14:17:06.716763 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-01 14:17:06.716773 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-01 14:17:06.716784 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-01 14:17:06.716795 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-01 14:17:06.716805 | orchestrator | 2025-11-01 14:17:06.716816 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-11-01 14:17:06.716826 | orchestrator | Saturday 01 November 2025 14:14:46 +0000 (0:00:09.307) 0:00:41.247 ***** 2025-11-01 14:17:06.716837 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-01 14:17:06.716848 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-01 14:17:06.716858 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-01 14:17:06.716869 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-01 14:17:06.716880 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-01 14:17:06.716897 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-01 14:17:06.716908 | orchestrator | 2025-11-01 14:17:06.716919 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-11-01 14:17:06.716929 | orchestrator | Saturday 01 November 2025 14:14:49 +0000 (0:00:03.061) 0:00:44.309 ***** 2025-11-01 14:17:06.716941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.716958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.716977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 14:17:06.716990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 14:17:06.717007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 14:17:06.717019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 14:17:06.717030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.717041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.717057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 14:17:06.717069 | orchestrator | 2025-11-01 14:17:06.717080 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-01 14:17:06.717096 | orchestrator | Saturday 
01 November 2025 14:14:51 +0000 (0:00:02.440) 0:00:46.750 ***** 2025-11-01 14:17:06.717107 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:17:06.717118 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:17:06.717128 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:17:06.717139 | orchestrator | 2025-11-01 14:17:06.717150 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-11-01 14:17:06.717160 | orchestrator | Saturday 01 November 2025 14:14:51 +0000 (0:00:00.303) 0:00:47.053 ***** 2025-11-01 14:17:06.717171 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:17:06.717182 | orchestrator | 2025-11-01 14:17:06.717192 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-11-01 14:17:06.717211 | orchestrator | Saturday 01 November 2025 14:14:54 +0000 (0:00:02.594) 0:00:49.648 ***** 2025-11-01 14:17:06.717222 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:17:06.717232 | orchestrator | 2025-11-01 14:17:06.717243 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-11-01 14:17:06.717254 | orchestrator | Saturday 01 November 2025 14:14:56 +0000 (0:00:02.505) 0:00:52.153 ***** 2025-11-01 14:17:06.717264 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:17:06.717275 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:17:06.717286 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:17:06.717297 | orchestrator | 2025-11-01 14:17:06.717307 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-11-01 14:17:06.717318 | orchestrator | Saturday 01 November 2025 14:14:57 +0000 (0:00:01.059) 0:00:53.212 ***** 2025-11-01 14:17:06.717329 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:17:06.717340 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:17:06.717350 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:17:06.717361 | orchestrator | 2025-11-01 14:17:06.717372 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-11-01 14:17:06.717382 | orchestrator | Saturday 01 November 2025 14:14:58 +0000 (0:00:00.363) 0:00:53.576 ***** 2025-11-01 14:17:06.717393 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:17:06.717404 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:17:06.717415 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:17:06.717426 | orchestrator | 2025-11-01 14:17:06.717436 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-11-01 14:17:06.717447 | orchestrator | Saturday 01 November 2025 14:14:58 +0000 (0:00:00.416) 0:00:53.992 ***** 2025-11-01 14:17:06.717458 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:17:06.717469 | orchestrator | 2025-11-01 14:17:06.717479 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-11-01 14:17:06.717490 | orchestrator | Saturday 01 November 2025 14:15:15 +0000 (0:00:16.292) 0:01:10.285 ***** 2025-11-01 14:17:06.717501 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:17:06.717512 | orchestrator | 2025-11-01 14:17:06.717574 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-11-01 14:17:06.717586 | orchestrator | Saturday 01 November 2025 14:15:27 +0000 (0:00:12.090) 0:01:22.375 ***** 2025-11-01 14:17:06.717597 | orchestrator | 2025-11-01 14:17:06.717607 | orchestrator | TASK 
[keystone : Flush handlers] *********************************************** 2025-11-01 14:17:06.717618 | orchestrator | Saturday 01 November 2025 14:15:27 +0000 (0:00:00.071) 0:01:22.447 ***** 2025-11-01 14:17:06.717629 | orchestrator | 2025-11-01 14:17:06.717640 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-11-01 14:17:06.717651 | orchestrator | Saturday 01 November 2025 14:15:27 +0000 (0:00:00.068) 0:01:22.515 ***** 2025-11-01 14:17:06.717661 | orchestrator | 2025-11-01 14:17:06.717672 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-11-01 14:17:06.717682 | orchestrator | Saturday 01 November 2025 14:15:27 +0000 (0:00:00.079) 0:01:22.595 ***** 2025-11-01 14:17:06.717693 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:17:06.717704 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:17:06.717715 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:17:06.717725 | orchestrator | 2025-11-01 14:17:06.717736 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-11-01 14:17:06.717747 | orchestrator | Saturday 01 November 2025 14:15:51 +0000 (0:00:24.394) 0:01:46.990 ***** 2025-11-01 14:17:06.717757 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:17:06.717768 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:17:06.717779 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:17:06.717789 | orchestrator | 2025-11-01 14:17:06.717800 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-11-01 14:17:06.717810 | orchestrator | Saturday 01 November 2025 14:16:01 +0000 (0:00:10.067) 0:01:57.057 ***** 2025-11-01 14:17:06.717820 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:17:06.717835 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:17:06.717845 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:17:06.717855 | orchestrator | 2025-11-01 14:17:06.717864 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-01 14:17:06.717874 | orchestrator | Saturday 01 November 2025 14:16:09 +0000 (0:00:07.539) 0:02:04.597 ***** 2025-11-01 14:17:06.717884 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:17:06.717893 | orchestrator | 2025-11-01 14:17:06.717907 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-11-01 14:17:06.717917 | orchestrator | Saturday 01 November 2025 14:16:10 +0000 (0:00:00.777) 0:02:05.374 ***** 2025-11-01 14:17:06.717927 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:17:06.717937 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:17:06.717946 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:17:06.717956 | orchestrator | 2025-11-01 14:17:06.717965 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-11-01 14:17:06.717975 | orchestrator | Saturday 01 November 2025 14:16:10 +0000 (0:00:00.813) 0:02:06.188 ***** 2025-11-01 14:17:06.717984 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:17:06.717994 | orchestrator | 2025-11-01 14:17:06.718003 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-11-01 14:17:06.718038 | orchestrator | Saturday 01 November 2025 14:16:12 +0000 (0:00:01.847) 0:02:08.036 ***** 2025-11-01 
14:17:06.718051 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-11-01 14:17:06.718060 | orchestrator | 2025-11-01 14:17:06.718076 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-11-01 14:17:06.718086 | orchestrator | Saturday 01 November 2025 14:16:25 +0000 (0:00:12.551) 0:02:20.587 ***** 2025-11-01 14:17:06.718095 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-11-01 14:17:06.718105 | orchestrator | 2025-11-01 14:17:06.718114 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-11-01 14:17:06.718124 | orchestrator | Saturday 01 November 2025 14:16:51 +0000 (0:00:25.952) 0:02:46.539 ***** 2025-11-01 14:17:06.718134 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-11-01 14:17:06.718143 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-11-01 14:17:06.718153 | orchestrator | 2025-11-01 14:17:06.718162 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-11-01 14:17:06.718172 | orchestrator | Saturday 01 November 2025 14:16:58 +0000 (0:00:07.393) 0:02:53.933 ***** 2025-11-01 14:17:06.718181 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:17:06.718190 | orchestrator | 2025-11-01 14:17:06.718200 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-11-01 14:17:06.718209 | orchestrator | Saturday 01 November 2025 14:16:58 +0000 (0:00:00.121) 0:02:54.055 ***** 2025-11-01 14:17:06.718219 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:17:06.718228 | orchestrator | 2025-11-01 14:17:06.718238 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-11-01 14:17:06.718247 | orchestrator | Saturday 01 November 2025 14:16:58 +0000 (0:00:00.126) 0:02:54.181 ***** 2025-11-01 14:17:06.718257 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:17:06.718266 | orchestrator | 2025-11-01 14:17:06.718275 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-11-01 14:17:06.718285 | orchestrator | Saturday 01 November 2025 14:16:59 +0000 (0:00:00.149) 0:02:54.331 ***** 2025-11-01 14:17:06.718295 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:17:06.718304 | orchestrator | 2025-11-01 14:17:06.718314 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-11-01 14:17:06.718323 | orchestrator | Saturday 01 November 2025 14:16:59 +0000 (0:00:00.532) 0:02:54.864 ***** 2025-11-01 14:17:06.718332 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:17:06.718348 | orchestrator | 2025-11-01 14:17:06.718358 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-01 14:17:06.718368 | orchestrator | Saturday 01 November 2025 14:17:02 +0000 (0:00:03.370) 0:02:58.234 ***** 2025-11-01 14:17:06.718377 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:17:06.718386 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:17:06.718396 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:17:06.718405 | orchestrator | 2025-11-01 14:17:06.718415 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:17:06.718425 | orchestrator | testbed-node-0 : ok=36  changed=20  
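The service-ks-register tasks above create the keystone identity service and its internal (https://api-int.testbed.osism.xyz:5000) and public (https://api.testbed.osism.xyz:5000) endpoints in region RegionOne. kolla-ansible performs this with its own Ansible modules; purely as an illustration, the same registration could be sketched with openstacksdk roughly as follows (the cloud name in clouds.yaml is an assumption):

import openstack

# Assumes an admin entry named "testbed-admin" in clouds.yaml;
# not how kolla-ansible's service-ks-register role actually does it.
conn = openstack.connect(cloud="testbed-admin")

service = conn.identity.create_service(name="keystone", type="identity")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:5000"),
    ("public", "https://api.testbed.osism.xyz:5000"),
]:
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        url=url,
        region_id="RegionOne",
    )
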
unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-11-01 14:17:06.718436 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-11-01 14:17:06.718446 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-11-01 14:17:06.718455 | orchestrator | 2025-11-01 14:17:06.718465 | orchestrator | 2025-11-01 14:17:06.718474 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:17:06.718483 | orchestrator | Saturday 01 November 2025 14:17:03 +0000 (0:00:00.465) 0:02:58.700 ***** 2025-11-01 14:17:06.718493 | orchestrator | =============================================================================== 2025-11-01 14:17:06.718502 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.95s 2025-11-01 14:17:06.718512 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 24.39s 2025-11-01 14:17:06.718537 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.29s 2025-11-01 14:17:06.718547 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.55s 2025-11-01 14:17:06.718557 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.09s 2025-11-01 14:17:06.718566 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.07s 2025-11-01 14:17:06.718575 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.31s 2025-11-01 14:17:06.718585 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.54s 2025-11-01 14:17:06.718594 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.39s 2025-11-01 14:17:06.718609 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.98s 2025-11-01 14:17:06.718619 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.43s 2025-11-01 14:17:06.718628 | orchestrator | keystone : Creating default user role ----------------------------------- 3.37s 2025-11-01 14:17:06.718638 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.24s 2025-11-01 14:17:06.718647 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.06s 2025-11-01 14:17:06.718657 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.59s 2025-11-01 14:17:06.718666 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.51s 2025-11-01 14:17:06.718676 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.44s 2025-11-01 14:17:06.718685 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.44s 2025-11-01 14:17:06.718700 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.89s 2025-11-01 14:17:06.718709 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.85s 2025-11-01 14:17:06.718719 | orchestrator | 2025-11-01 14:17:06 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:17:06.718729 | orchestrator | 2025-11-01 14:17:06 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 
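The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines that follow are the OSISM client polling the queued deployment tasks until each reaches a final state. A minimal sketch of such a loop, with the state lookup passed in as a callable because the real client talks to its own API/task backend:

import time
from typing import Callable, Iterable

def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    """Poll task states until every task reaches a final state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)  # e.g. STARTED, SUCCESS, FAILURE
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
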
14:17:06.718738 | orchestrator | 2025-11-01 14:17:06 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:06.719000 | orchestrator | 2025-11-01 14:17:06 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:06.719020 | orchestrator | 2025-11-01 14:17:06 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:09.752094 | orchestrator | 2025-11-01 14:17:09 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:09.752352 | orchestrator | 2025-11-01 14:17:09 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state STARTED 2025-11-01 14:17:09.753254 | orchestrator | 2025-11-01 14:17:09 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:09.756627 | orchestrator | 2025-11-01 14:17:09 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:09.758014 | orchestrator | 2025-11-01 14:17:09 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:09.758082 | orchestrator | 2025-11-01 14:17:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:12.799933 | orchestrator | 2025-11-01 14:17:12 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:12.802506 | orchestrator | 2025-11-01 14:17:12 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:12.807025 | orchestrator | 2025-11-01 14:17:12 | INFO  | Task b1adacd1-85d9-46a2-817b-7d898a9eee9e is in state SUCCESS 2025-11-01 14:17:12.809322 | orchestrator | 2025-11-01 14:17:12 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:12.811943 | orchestrator | 2025-11-01 14:17:12 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:12.814228 | orchestrator | 2025-11-01 14:17:12 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:12.814384 | orchestrator | 2025-11-01 14:17:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:15.861362 | orchestrator | 2025-11-01 14:17:15 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:15.862948 | orchestrator | 2025-11-01 14:17:15 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:15.865291 | orchestrator | 2025-11-01 14:17:15 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:15.867702 | orchestrator | 2025-11-01 14:17:15 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:15.868673 | orchestrator | 2025-11-01 14:17:15 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:15.868957 | orchestrator | 2025-11-01 14:17:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:18.920666 | orchestrator | 2025-11-01 14:17:18 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:18.921060 | orchestrator | 2025-11-01 14:17:18 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:18.923164 | orchestrator | 2025-11-01 14:17:18 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:18.925622 | orchestrator | 2025-11-01 14:17:18 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:18.926907 | orchestrator | 2025-11-01 14:17:18 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in 
state STARTED 2025-11-01 14:17:18.926935 | orchestrator | 2025-11-01 14:17:18 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:21.980692 | orchestrator | 2025-11-01 14:17:21 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:21.983202 | orchestrator | 2025-11-01 14:17:21 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:21.985770 | orchestrator | 2025-11-01 14:17:21 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:21.989738 | orchestrator | 2025-11-01 14:17:21 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:21.992445 | orchestrator | 2025-11-01 14:17:21 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:21.992752 | orchestrator | 2025-11-01 14:17:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:25.048483 | orchestrator | 2025-11-01 14:17:25 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:25.049514 | orchestrator | 2025-11-01 14:17:25 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:25.050755 | orchestrator | 2025-11-01 14:17:25 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:25.052075 | orchestrator | 2025-11-01 14:17:25 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:25.053220 | orchestrator | 2025-11-01 14:17:25 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:25.053247 | orchestrator | 2025-11-01 14:17:25 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:28.101442 | orchestrator | 2025-11-01 14:17:28 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:28.106730 | orchestrator | 2025-11-01 14:17:28 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:28.110567 | orchestrator | 2025-11-01 14:17:28 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:28.114260 | orchestrator | 2025-11-01 14:17:28 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:28.119074 | orchestrator | 2025-11-01 14:17:28 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:28.120083 | orchestrator | 2025-11-01 14:17:28 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:31.167864 | orchestrator | 2025-11-01 14:17:31 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:31.170836 | orchestrator | 2025-11-01 14:17:31 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:31.173054 | orchestrator | 2025-11-01 14:17:31 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:31.174986 | orchestrator | 2025-11-01 14:17:31 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:31.176918 | orchestrator | 2025-11-01 14:17:31 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:31.177277 | orchestrator | 2025-11-01 14:17:31 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:34.220598 | orchestrator | 2025-11-01 14:17:34 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:34.221807 | orchestrator | 2025-11-01 14:17:34 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in 
state STARTED 2025-11-01 14:17:34.224828 | orchestrator | 2025-11-01 14:17:34 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:34.226625 | orchestrator | 2025-11-01 14:17:34 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:34.227843 | orchestrator | 2025-11-01 14:17:34 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:34.228199 | orchestrator | 2025-11-01 14:17:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:37.276255 | orchestrator | 2025-11-01 14:17:37 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:37.278675 | orchestrator | 2025-11-01 14:17:37 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:37.280839 | orchestrator | 2025-11-01 14:17:37 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:37.282883 | orchestrator | 2025-11-01 14:17:37 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:37.284918 | orchestrator | 2025-11-01 14:17:37 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:37.284940 | orchestrator | 2025-11-01 14:17:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:40.322403 | orchestrator | 2025-11-01 14:17:40 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:40.323352 | orchestrator | 2025-11-01 14:17:40 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:40.324073 | orchestrator | 2025-11-01 14:17:40 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:40.325123 | orchestrator | 2025-11-01 14:17:40 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:40.326457 | orchestrator | 2025-11-01 14:17:40 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:40.326484 | orchestrator | 2025-11-01 14:17:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:43.368226 | orchestrator | 2025-11-01 14:17:43 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:43.371830 | orchestrator | 2025-11-01 14:17:43 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:43.374662 | orchestrator | 2025-11-01 14:17:43 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:43.376452 | orchestrator | 2025-11-01 14:17:43 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:43.378729 | orchestrator | 2025-11-01 14:17:43 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:43.378754 | orchestrator | 2025-11-01 14:17:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:46.426151 | orchestrator | 2025-11-01 14:17:46 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:46.427996 | orchestrator | 2025-11-01 14:17:46 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:46.429939 | orchestrator | 2025-11-01 14:17:46 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:46.432626 | orchestrator | 2025-11-01 14:17:46 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:46.435016 | orchestrator | 2025-11-01 14:17:46 | INFO  | Task 
090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:46.435037 | orchestrator | 2025-11-01 14:17:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:49.466438 | orchestrator | 2025-11-01 14:17:49 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:49.471011 | orchestrator | 2025-11-01 14:17:49 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:49.473285 | orchestrator | 2025-11-01 14:17:49 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:49.476106 | orchestrator | 2025-11-01 14:17:49 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:49.480199 | orchestrator | 2025-11-01 14:17:49 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:49.480550 | orchestrator | 2025-11-01 14:17:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:52.512235 | orchestrator | 2025-11-01 14:17:52 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:52.512782 | orchestrator | 2025-11-01 14:17:52 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:52.513641 | orchestrator | 2025-11-01 14:17:52 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:52.514479 | orchestrator | 2025-11-01 14:17:52 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:52.515393 | orchestrator | 2025-11-01 14:17:52 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:52.515497 | orchestrator | 2025-11-01 14:17:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:55.546284 | orchestrator | 2025-11-01 14:17:55 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:55.546762 | orchestrator | 2025-11-01 14:17:55 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:55.548272 | orchestrator | 2025-11-01 14:17:55 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:55.549036 | orchestrator | 2025-11-01 14:17:55 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:55.549880 | orchestrator | 2025-11-01 14:17:55 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:55.550447 | orchestrator | 2025-11-01 14:17:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:17:58.585458 | orchestrator | 2025-11-01 14:17:58 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:17:58.587257 | orchestrator | 2025-11-01 14:17:58 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:17:58.588020 | orchestrator | 2025-11-01 14:17:58 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:17:58.589783 | orchestrator | 2025-11-01 14:17:58 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:17:58.591567 | orchestrator | 2025-11-01 14:17:58 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:17:58.591597 | orchestrator | 2025-11-01 14:17:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:01.629066 | orchestrator | 2025-11-01 14:18:01 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:18:01.629222 | orchestrator | 2025-11-01 14:18:01 | INFO  | Task 
c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:01.630100 | orchestrator | 2025-11-01 14:18:01 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:01.630990 | orchestrator | 2025-11-01 14:18:01 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:01.631601 | orchestrator | 2025-11-01 14:18:01 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:01.631625 | orchestrator | 2025-11-01 14:18:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:04.663946 | orchestrator | 2025-11-01 14:18:04 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:18:04.664716 | orchestrator | 2025-11-01 14:18:04 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:04.665182 | orchestrator | 2025-11-01 14:18:04 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:04.665791 | orchestrator | 2025-11-01 14:18:04 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:04.666588 | orchestrator | 2025-11-01 14:18:04 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:04.666619 | orchestrator | 2025-11-01 14:18:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:07.691865 | orchestrator | 2025-11-01 14:18:07 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:18:07.692257 | orchestrator | 2025-11-01 14:18:07 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:07.693292 | orchestrator | 2025-11-01 14:18:07 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:07.694126 | orchestrator | 2025-11-01 14:18:07 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:07.694801 | orchestrator | 2025-11-01 14:18:07 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:07.695568 | orchestrator | 2025-11-01 14:18:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:10.731947 | orchestrator | 2025-11-01 14:18:10 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:18:10.732323 | orchestrator | 2025-11-01 14:18:10 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:10.733142 | orchestrator | 2025-11-01 14:18:10 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:10.735221 | orchestrator | 2025-11-01 14:18:10 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:10.735945 | orchestrator | 2025-11-01 14:18:10 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:10.735972 | orchestrator | 2025-11-01 14:18:10 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:13.765479 | orchestrator | 2025-11-01 14:18:13 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:18:13.765629 | orchestrator | 2025-11-01 14:18:13 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:13.766070 | orchestrator | 2025-11-01 14:18:13 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:13.766761 | orchestrator | 2025-11-01 14:18:13 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:13.767482 | orchestrator | 2025-11-01 
14:18:13 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:13.767502 | orchestrator | 2025-11-01 14:18:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:16.825118 | orchestrator | 2025-11-01 14:18:16 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:18:16.826140 | orchestrator | 2025-11-01 14:18:16 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:16.827200 | orchestrator | 2025-11-01 14:18:16 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:16.828331 | orchestrator | 2025-11-01 14:18:16 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:16.829457 | orchestrator | 2025-11-01 14:18:16 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:16.829702 | orchestrator | 2025-11-01 14:18:16 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:19.865819 | orchestrator | 2025-11-01 14:18:19 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:18:19.866788 | orchestrator | 2025-11-01 14:18:19 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:19.867966 | orchestrator | 2025-11-01 14:18:19 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:19.868790 | orchestrator | 2025-11-01 14:18:19 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:19.869804 | orchestrator | 2025-11-01 14:18:19 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:19.869907 | orchestrator | 2025-11-01 14:18:19 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:22.909563 | orchestrator | 2025-11-01 14:18:22 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:18:22.912292 | orchestrator | 2025-11-01 14:18:22 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:22.913829 | orchestrator | 2025-11-01 14:18:22 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:22.915913 | orchestrator | 2025-11-01 14:18:22 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:22.917955 | orchestrator | 2025-11-01 14:18:22 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:22.919731 | orchestrator | 2025-11-01 14:18:22 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:25.957005 | orchestrator | 2025-11-01 14:18:25 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:18:25.958121 | orchestrator | 2025-11-01 14:18:25 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:25.959319 | orchestrator | 2025-11-01 14:18:25 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:25.962579 | orchestrator | 2025-11-01 14:18:25 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:25.963497 | orchestrator | 2025-11-01 14:18:25 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:25.963811 | orchestrator | 2025-11-01 14:18:25 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:29.000946 | orchestrator | 2025-11-01 14:18:28 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:18:29.001606 | orchestrator | 2025-11-01 
14:18:29 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:29.005326 | orchestrator | 2025-11-01 14:18:29 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:29.006217 | orchestrator | 2025-11-01 14:18:29 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:29.007076 | orchestrator | 2025-11-01 14:18:29 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:29.007099 | orchestrator | 2025-11-01 14:18:29 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:32.042241 | orchestrator | 2025-11-01 14:18:32 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:18:32.044023 | orchestrator | 2025-11-01 14:18:32 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:32.045610 | orchestrator | 2025-11-01 14:18:32 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:32.046729 | orchestrator | 2025-11-01 14:18:32 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:32.048012 | orchestrator | 2025-11-01 14:18:32 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:32.048105 | orchestrator | 2025-11-01 14:18:32 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:35.076380 | orchestrator | 2025-11-01 14:18:35 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state STARTED 2025-11-01 14:18:35.076934 | orchestrator | 2025-11-01 14:18:35 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:35.077755 | orchestrator | 2025-11-01 14:18:35 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:35.080291 | orchestrator | 2025-11-01 14:18:35 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:35.082548 | orchestrator | 2025-11-01 14:18:35 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:35.082575 | orchestrator | 2025-11-01 14:18:35 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:38.115306 | orchestrator | 2025-11-01 14:18:38.115425 | orchestrator | 2025-11-01 14:18:38.115456 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-11-01 14:18:38.115468 | orchestrator | 2025-11-01 14:18:38.115479 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-11-01 14:18:38.115491 | orchestrator | Saturday 01 November 2025 14:16:08 +0000 (0:00:00.265) 0:00:00.265 ***** 2025-11-01 14:18:38.115502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-11-01 14:18:38.115562 | orchestrator | 2025-11-01 14:18:38.115574 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-11-01 14:18:38.115585 | orchestrator | Saturday 01 November 2025 14:16:08 +0000 (0:00:00.232) 0:00:00.497 ***** 2025-11-01 14:18:38.115597 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-11-01 14:18:38.115608 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-11-01 14:18:38.115619 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-11-01 14:18:38.115629 | orchestrator | 2025-11-01 14:18:38.115640 | orchestrator | TASK 
[osism.services.cephclient : Copy configuration files] ******************** 2025-11-01 14:18:38.115651 | orchestrator | Saturday 01 November 2025 14:16:10 +0000 (0:00:01.316) 0:00:01.814 ***** 2025-11-01 14:18:38.115662 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-11-01 14:18:38.115672 | orchestrator | 2025-11-01 14:18:38.115683 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-11-01 14:18:38.115694 | orchestrator | Saturday 01 November 2025 14:16:11 +0000 (0:00:01.557) 0:00:03.371 ***** 2025-11-01 14:18:38.115704 | orchestrator | changed: [testbed-manager] 2025-11-01 14:18:38.115715 | orchestrator | 2025-11-01 14:18:38.115726 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-11-01 14:18:38.115736 | orchestrator | Saturday 01 November 2025 14:16:12 +0000 (0:00:00.951) 0:00:04.322 ***** 2025-11-01 14:18:38.115747 | orchestrator | changed: [testbed-manager] 2025-11-01 14:18:38.115757 | orchestrator | 2025-11-01 14:18:38.115768 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-11-01 14:18:38.115778 | orchestrator | Saturday 01 November 2025 14:16:13 +0000 (0:00:00.971) 0:00:05.294 ***** 2025-11-01 14:18:38.115789 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-11-01 14:18:38.115800 | orchestrator | ok: [testbed-manager] 2025-11-01 14:18:38.115810 | orchestrator | 2025-11-01 14:18:38.115821 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-11-01 14:18:38.115831 | orchestrator | Saturday 01 November 2025 14:16:57 +0000 (0:00:43.537) 0:00:48.832 ***** 2025-11-01 14:18:38.115842 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-11-01 14:18:38.115876 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-11-01 14:18:38.115887 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-11-01 14:18:38.115898 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-11-01 14:18:38.115908 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-11-01 14:18:38.115919 | orchestrator | 2025-11-01 14:18:38.115929 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-11-01 14:18:38.115940 | orchestrator | Saturday 01 November 2025 14:17:01 +0000 (0:00:04.363) 0:00:53.195 ***** 2025-11-01 14:18:38.115950 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-11-01 14:18:38.115961 | orchestrator | 2025-11-01 14:18:38.115972 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-11-01 14:18:38.115983 | orchestrator | Saturday 01 November 2025 14:17:01 +0000 (0:00:00.519) 0:00:53.714 ***** 2025-11-01 14:18:38.115993 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:18:38.116004 | orchestrator | 2025-11-01 14:18:38.116014 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-11-01 14:18:38.116025 | orchestrator | Saturday 01 November 2025 14:17:02 +0000 (0:00:00.145) 0:00:53.859 ***** 2025-11-01 14:18:38.116035 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:18:38.116046 | orchestrator | 2025-11-01 14:18:38.116070 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-11-01 
14:18:38.116081 | orchestrator | Saturday 01 November 2025 14:17:02 +0000 (0:00:00.541) 0:00:54.401 ***** 2025-11-01 14:18:38.116092 | orchestrator | changed: [testbed-manager] 2025-11-01 14:18:38.116102 | orchestrator | 2025-11-01 14:18:38.116113 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-11-01 14:18:38.116123 | orchestrator | Saturday 01 November 2025 14:17:04 +0000 (0:00:01.637) 0:00:56.038 ***** 2025-11-01 14:18:38.116134 | orchestrator | changed: [testbed-manager] 2025-11-01 14:18:38.116144 | orchestrator | 2025-11-01 14:18:38.116155 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-11-01 14:18:38.116165 | orchestrator | Saturday 01 November 2025 14:17:05 +0000 (0:00:01.249) 0:00:57.287 ***** 2025-11-01 14:18:38.116176 | orchestrator | changed: [testbed-manager] 2025-11-01 14:18:38.116186 | orchestrator | 2025-11-01 14:18:38.116197 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-11-01 14:18:38.116207 | orchestrator | Saturday 01 November 2025 14:17:07 +0000 (0:00:01.986) 0:00:59.274 ***** 2025-11-01 14:18:38.116218 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-11-01 14:18:38.116228 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-11-01 14:18:38.116239 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-11-01 14:18:38.116249 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-11-01 14:18:38.116260 | orchestrator | 2025-11-01 14:18:38.116270 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:18:38.116281 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 14:18:38.116293 | orchestrator | 2025-11-01 14:18:38.116303 | orchestrator | 2025-11-01 14:18:38.116329 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:18:38.116340 | orchestrator | Saturday 01 November 2025 14:17:09 +0000 (0:00:01.924) 0:01:01.198 ***** 2025-11-01 14:18:38.116352 | orchestrator | =============================================================================== 2025-11-01 14:18:38.116363 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 43.54s 2025-11-01 14:18:38.116373 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.36s 2025-11-01 14:18:38.116384 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 1.99s 2025-11-01 14:18:38.116395 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.92s 2025-11-01 14:18:38.116405 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.64s 2025-11-01 14:18:38.116424 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.56s 2025-11-01 14:18:38.116435 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.32s 2025-11-01 14:18:38.116445 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 1.25s 2025-11-01 14:18:38.116456 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.97s 2025-11-01 14:18:38.116467 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s 2025-11-01 14:18:38.116477 | orchestrator | 
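The handlers above ("Ensure that all containers are up", "Wait for an healthy service") amount to polling the cephclient container until Docker reports it healthy. A hedged sketch of that wait, shelling out to docker inspect; the container name "cephclient", timeout and interval are assumptions:

import subprocess
import time

def wait_until_healthy(container: str, timeout: float = 300.0, interval: float = 5.0) -> None:
    """Poll `docker inspect` until the container's health status is 'healthy'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = subprocess.run(
            ["docker", "inspect", "--format", "{{.State.Health.Status}}", container],
            capture_output=True, text=True,
        )
        if result.returncode == 0 and result.stdout.strip() == "healthy":
            return
        time.sleep(interval)
    raise TimeoutError(f"{container} did not become healthy within {timeout}s")

# Example, with the assumed container name:
# wait_until_healthy("cephclient")
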
osism.services.cephclient : Include rook task --------------------------- 0.54s 2025-11-01 14:18:38.116488 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.52s 2025-11-01 14:18:38.116498 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2025-11-01 14:18:38.116526 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2025-11-01 14:18:38.116537 | orchestrator | 2025-11-01 14:18:38.116548 | orchestrator | 2025-11-01 14:18:38.116559 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-11-01 14:18:38.116569 | orchestrator | 2025-11-01 14:18:38.116580 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-11-01 14:18:38.116591 | orchestrator | Saturday 01 November 2025 14:17:14 +0000 (0:00:00.273) 0:00:00.273 ***** 2025-11-01 14:18:38.116601 | orchestrator | changed: [testbed-manager] 2025-11-01 14:18:38.116612 | orchestrator | 2025-11-01 14:18:38.116622 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-11-01 14:18:38.116633 | orchestrator | Saturday 01 November 2025 14:17:16 +0000 (0:00:01.674) 0:00:01.947 ***** 2025-11-01 14:18:38.116644 | orchestrator | changed: [testbed-manager] 2025-11-01 14:18:38.116654 | orchestrator | 2025-11-01 14:18:38.116665 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-11-01 14:18:38.116675 | orchestrator | Saturday 01 November 2025 14:17:17 +0000 (0:00:01.233) 0:00:03.180 ***** 2025-11-01 14:18:38.116686 | orchestrator | changed: [testbed-manager] 2025-11-01 14:18:38.116696 | orchestrator | 2025-11-01 14:18:38.116707 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-11-01 14:18:38.116717 | orchestrator | Saturday 01 November 2025 14:17:18 +0000 (0:00:01.169) 0:00:04.350 ***** 2025-11-01 14:18:38.116728 | orchestrator | changed: [testbed-manager] 2025-11-01 14:18:38.116738 | orchestrator | 2025-11-01 14:18:38.116749 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-11-01 14:18:38.116760 | orchestrator | Saturday 01 November 2025 14:17:20 +0000 (0:00:01.294) 0:00:05.644 ***** 2025-11-01 14:18:38.116770 | orchestrator | changed: [testbed-manager] 2025-11-01 14:18:38.116781 | orchestrator | 2025-11-01 14:18:38.116791 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-11-01 14:18:38.116802 | orchestrator | Saturday 01 November 2025 14:17:21 +0000 (0:00:01.179) 0:00:06.823 ***** 2025-11-01 14:18:38.116812 | orchestrator | changed: [testbed-manager] 2025-11-01 14:18:38.116823 | orchestrator | 2025-11-01 14:18:38.116833 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-11-01 14:18:38.116844 | orchestrator | Saturday 01 November 2025 14:17:22 +0000 (0:00:01.114) 0:00:07.937 ***** 2025-11-01 14:18:38.116855 | orchestrator | changed: [testbed-manager] 2025-11-01 14:18:38.116865 | orchestrator | 2025-11-01 14:18:38.116881 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-11-01 14:18:38.116892 | orchestrator | Saturday 01 November 2025 14:17:24 +0000 (0:00:02.068) 0:00:10.006 ***** 2025-11-01 14:18:38.116903 | orchestrator | changed: [testbed-manager] 2025-11-01 14:18:38.116914 | 
orchestrator | 2025-11-01 14:18:38.116924 | orchestrator | TASK [Create admin user] ******************************************************* 2025-11-01 14:18:38.116935 | orchestrator | Saturday 01 November 2025 14:17:25 +0000 (0:00:01.322) 0:00:11.329 ***** 2025-11-01 14:18:38.116946 | orchestrator | changed: [testbed-manager] 2025-11-01 14:18:38.116991 | orchestrator | 2025-11-01 14:18:38.117015 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-11-01 14:18:38.117033 | orchestrator | Saturday 01 November 2025 14:18:12 +0000 (0:00:46.817) 0:00:58.147 ***** 2025-11-01 14:18:38.117044 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:18:38.117055 | orchestrator | 2025-11-01 14:18:38.117065 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-11-01 14:18:38.117076 | orchestrator | 2025-11-01 14:18:38.117086 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-11-01 14:18:38.117097 | orchestrator | Saturday 01 November 2025 14:18:12 +0000 (0:00:00.156) 0:00:58.303 ***** 2025-11-01 14:18:38.117107 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:18:38.117118 | orchestrator | 2025-11-01 14:18:38.117129 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-11-01 14:18:38.117139 | orchestrator | 2025-11-01 14:18:38.117150 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-11-01 14:18:38.117160 | orchestrator | Saturday 01 November 2025 14:18:24 +0000 (0:00:11.685) 0:01:09.989 ***** 2025-11-01 14:18:38.117171 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:18:38.117182 | orchestrator | 2025-11-01 14:18:38.117192 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-11-01 14:18:38.117203 | orchestrator | 2025-11-01 14:18:38.117220 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-11-01 14:18:38.117231 | orchestrator | Saturday 01 November 2025 14:18:25 +0000 (0:00:01.647) 0:01:11.636 ***** 2025-11-01 14:18:38.117242 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:18:38.117252 | orchestrator | 2025-11-01 14:18:38.117263 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:18:38.117274 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 14:18:38.117285 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:18:38.117296 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:18:38.117306 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:18:38.117317 | orchestrator | 2025-11-01 14:18:38.117328 | orchestrator | 2025-11-01 14:18:38.117338 | orchestrator | 2025-11-01 14:18:38.117349 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:18:38.117360 | orchestrator | Saturday 01 November 2025 14:18:37 +0000 (0:00:11.210) 0:01:22.847 ***** 2025-11-01 14:18:38.117370 | orchestrator | =============================================================================== 2025-11-01 14:18:38.117381 | orchestrator | Create admin user 
------------------------------------------------------ 46.82s 2025-11-01 14:18:38.117392 | orchestrator | Restart ceph manager service ------------------------------------------- 24.54s 2025-11-01 14:18:38.117402 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.07s 2025-11-01 14:18:38.117413 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.67s 2025-11-01 14:18:38.117423 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.32s 2025-11-01 14:18:38.117434 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.29s 2025-11-01 14:18:38.117445 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.23s 2025-11-01 14:18:38.117455 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.18s 2025-11-01 14:18:38.117466 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.17s 2025-11-01 14:18:38.117476 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.11s 2025-11-01 14:18:38.117487 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s 2025-11-01 14:18:38.117504 | orchestrator | 2025-11-01 14:18:38 | INFO  | Task f87b06e5-800b-48c6-9b98-d93d16c0257c is in state SUCCESS 2025-11-01 14:18:38.117636 | orchestrator | 2025-11-01 14:18:38 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:38.117652 | orchestrator | 2025-11-01 14:18:38 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:38.118116 | orchestrator | 2025-11-01 14:18:38 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:38.119231 | orchestrator | 2025-11-01 14:18:38 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:38.119250 | orchestrator | 2025-11-01 14:18:38 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:41.166854 | orchestrator | 2025-11-01 14:18:41 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:41.168439 | orchestrator | 2025-11-01 14:18:41 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:41.169304 | orchestrator | 2025-11-01 14:18:41 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:41.170151 | orchestrator | 2025-11-01 14:18:41 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:41.170174 | orchestrator | 2025-11-01 14:18:41 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:44.228658 | orchestrator | 2025-11-01 14:18:44 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:44.229984 | orchestrator | 2025-11-01 14:18:44 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:44.231368 | orchestrator | 2025-11-01 14:18:44 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:44.232975 | orchestrator | 2025-11-01 14:18:44 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:44.232999 | orchestrator | 2025-11-01 14:18:44 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:47.266718 | orchestrator | 2025-11-01 14:18:47 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 
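The ceph dashboard play above disables the dashboard module, adjusts its mgr options (ssl off, port 7000, bind address 0.0.0.0, standby behaviour error with status code 404), re-enables it, and creates an admin account from a temporary password file. It presumably wraps ceph CLI calls along these lines; the exact flags and the password file path are assumptions, shown here via subprocess:

import subprocess

def ceph(*args: str) -> None:
    """Run a ceph CLI command and fail loudly on a non-zero exit code."""
    subprocess.run(["ceph", *args], check=True)

ceph("mgr", "module", "disable", "dashboard")
ceph("config", "set", "mgr", "mgr/dashboard/ssl", "false")
ceph("config", "set", "mgr", "mgr/dashboard/server_port", "7000")
ceph("config", "set", "mgr", "mgr/dashboard/server_addr", "0.0.0.0")
ceph("config", "set", "mgr", "mgr/dashboard/standby_behaviour", "error")
ceph("config", "set", "mgr", "mgr/dashboard/standby_error_status_code", "404")
ceph("mgr", "module", "enable", "dashboard")
# The admin password is read from a temporary file, as in the play above.
ceph("dashboard", "ac-user-create", "admin", "-i", "/tmp/ceph_dashboard_password", "administrator")
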
14:18:47.267318 | orchestrator | 2025-11-01 14:18:47 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:47.268660 | orchestrator | 2025-11-01 14:18:47 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:47.270242 | orchestrator | 2025-11-01 14:18:47 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:47.270267 | orchestrator | 2025-11-01 14:18:47 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:50.313877 | orchestrator | 2025-11-01 14:18:50 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:50.314394 | orchestrator | 2025-11-01 14:18:50 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:50.316694 | orchestrator | 2025-11-01 14:18:50 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:50.317661 | orchestrator | 2025-11-01 14:18:50 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:50.317704 | orchestrator | 2025-11-01 14:18:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:53.353177 | orchestrator | 2025-11-01 14:18:53 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:53.353817 | orchestrator | 2025-11-01 14:18:53 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:53.354848 | orchestrator | 2025-11-01 14:18:53 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:53.355904 | orchestrator | 2025-11-01 14:18:53 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:53.355929 | orchestrator | 2025-11-01 14:18:53 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:56.399994 | orchestrator | 2025-11-01 14:18:56 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:56.400050 | orchestrator | 2025-11-01 14:18:56 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:56.400056 | orchestrator | 2025-11-01 14:18:56 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:56.400061 | orchestrator | 2025-11-01 14:18:56 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:56.400066 | orchestrator | 2025-11-01 14:18:56 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:18:59.434070 | orchestrator | 2025-11-01 14:18:59 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:18:59.434778 | orchestrator | 2025-11-01 14:18:59 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:18:59.435938 | orchestrator | 2025-11-01 14:18:59 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:18:59.436760 | orchestrator | 2025-11-01 14:18:59 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:18:59.436871 | orchestrator | 2025-11-01 14:18:59 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:19:02.480289 | orchestrator | 2025-11-01 14:19:02 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:19:02.480648 | orchestrator | 2025-11-01 14:19:02 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:19:02.481380 | orchestrator | 2025-11-01 14:19:02 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 
14:19:02.482146 | orchestrator | 2025-11-01 14:19:02 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:19:02.482169 | orchestrator | 2025-11-01 14:19:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:19:05.507896 | orchestrator | 2025-11-01 14:19:05 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:19:05.508210 | orchestrator | 2025-11-01 14:19:05 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:19:05.509471 | orchestrator | 2025-11-01 14:19:05 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:19:05.510450 | orchestrator | 2025-11-01 14:19:05 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:19:05.510549 | orchestrator | 2025-11-01 14:19:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:19:08.544266 | orchestrator | 2025-11-01 14:19:08 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:19:08.548647 | orchestrator | 2025-11-01 14:19:08 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:19:08.550497 | orchestrator | 2025-11-01 14:19:08 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:19:08.552849 | orchestrator | 2025-11-01 14:19:08 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:19:08.552872 | orchestrator | 2025-11-01 14:19:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:19:11.588480 | orchestrator | 2025-11-01 14:19:11 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:19:11.591127 | orchestrator | 2025-11-01 14:19:11 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:19:11.592035 | orchestrator | 2025-11-01 14:19:11 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:19:11.593068 | orchestrator | 2025-11-01 14:19:11 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:19:11.593092 | orchestrator | 2025-11-01 14:19:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:19:14.621991 | orchestrator | 2025-11-01 14:19:14 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:19:14.622206 | orchestrator | 2025-11-01 14:19:14 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:19:14.622694 | orchestrator | 2025-11-01 14:19:14 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:19:14.623417 | orchestrator | 2025-11-01 14:19:14 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:19:14.623458 | orchestrator | 2025-11-01 14:19:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:19:17.652440 | orchestrator | 2025-11-01 14:19:17 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:19:17.657082 | orchestrator | 2025-11-01 14:19:17 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:19:17.659490 | orchestrator | 2025-11-01 14:19:17 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:19:17.661787 | orchestrator | 2025-11-01 14:19:17 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:19:17.662207 | orchestrator | 2025-11-01 14:19:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:19:20.700856 | 
orchestrator | 2025-11-01 14:19:20 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state STARTED 2025-11-01 14:19:20.701700 | orchestrator | 2025-11-01 14:19:20 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:19:20.702831 | orchestrator | 2025-11-01 14:19:20 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:19:20.703877 | orchestrator | 2025-11-01 14:19:20 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:19:20.704052 | orchestrator | 2025-11-01 14:19:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:19:23.746433 | orchestrator | 2025-11-01 14:19:23 | INFO  | Task e70d5da2-52c9-4eb2-acfd-801329b50649 is in state STARTED 2025-11-01 14:19:23.748063 | orchestrator | 2025-11-01 14:19:23 | INFO  | Task c834e809-922f-4a23-bf47-fad0934e643e is in state SUCCESS 2025-11-01 14:19:23.753438 | orchestrator | 2025-11-01 14:19:23.753474 | orchestrator | 2025-11-01 14:19:23.753503 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:19:23.753542 | orchestrator | 2025-11-01 14:19:23.753554 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:19:23.753565 | orchestrator | Saturday 01 November 2025 14:17:10 +0000 (0:00:00.389) 0:00:00.389 ***** 2025-11-01 14:19:23.753576 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:19:23.753589 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:19:23.753600 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:19:23.753611 | orchestrator | 2025-11-01 14:19:23.753622 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:19:23.753633 | orchestrator | Saturday 01 November 2025 14:17:10 +0000 (0:00:00.530) 0:00:00.920 ***** 2025-11-01 14:19:23.753737 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-11-01 14:19:23.753751 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-11-01 14:19:23.753785 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-11-01 14:19:23.753796 | orchestrator | 2025-11-01 14:19:23.753807 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-11-01 14:19:23.753818 | orchestrator | 2025-11-01 14:19:23.753828 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-11-01 14:19:23.753839 | orchestrator | Saturday 01 November 2025 14:17:11 +0000 (0:00:00.835) 0:00:01.756 ***** 2025-11-01 14:19:23.753850 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:19:23.753862 | orchestrator | 2025-11-01 14:19:23.753873 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-11-01 14:19:23.753883 | orchestrator | Saturday 01 November 2025 14:17:12 +0000 (0:00:00.841) 0:00:02.598 ***** 2025-11-01 14:19:23.753895 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-11-01 14:19:23.753906 | orchestrator | 2025-11-01 14:19:23.753916 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-11-01 14:19:23.753927 | orchestrator | Saturday 01 November 2025 14:17:16 +0000 (0:00:04.063) 0:00:06.661 ***** 2025-11-01 14:19:23.753937 | orchestrator | changed: [testbed-node-0] => (item=barbican -> 
https://api-int.testbed.osism.xyz:9311 -> internal) 2025-11-01 14:19:23.753948 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-11-01 14:19:23.753959 | orchestrator | 2025-11-01 14:19:23.753970 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-11-01 14:19:23.753980 | orchestrator | Saturday 01 November 2025 14:17:23 +0000 (0:00:07.266) 0:00:13.928 ***** 2025-11-01 14:19:23.753991 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-11-01 14:19:23.754002 | orchestrator | 2025-11-01 14:19:23.754013 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-11-01 14:19:23.754077 | orchestrator | Saturday 01 November 2025 14:17:27 +0000 (0:00:04.052) 0:00:17.981 ***** 2025-11-01 14:19:23.754088 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 14:19:23.754099 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-11-01 14:19:23.754110 | orchestrator | 2025-11-01 14:19:23.754121 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-11-01 14:19:23.754131 | orchestrator | Saturday 01 November 2025 14:17:32 +0000 (0:00:04.208) 0:00:22.189 ***** 2025-11-01 14:19:23.754142 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 14:19:23.754153 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-11-01 14:19:23.754164 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-11-01 14:19:23.754175 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-11-01 14:19:23.754186 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-11-01 14:19:23.754196 | orchestrator | 2025-11-01 14:19:23.754207 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-11-01 14:19:23.754218 | orchestrator | Saturday 01 November 2025 14:17:47 +0000 (0:00:15.476) 0:00:37.666 ***** 2025-11-01 14:19:23.754229 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-11-01 14:19:23.754239 | orchestrator | 2025-11-01 14:19:23.754250 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-11-01 14:19:23.754261 | orchestrator | Saturday 01 November 2025 14:17:52 +0000 (0:00:04.606) 0:00:42.272 ***** 2025-11-01 14:19:23.754275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.754321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.754334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.754347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.754362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.754374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.754400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.754414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.754425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.754437 | orchestrator | 2025-11-01 14:19:23.754447 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-11-01 14:19:23.754459 | orchestrator | Saturday 01 November 2025 14:17:55 +0000 (0:00:02.886) 0:00:45.159 ***** 2025-11-01 14:19:23.754469 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-11-01 14:19:23.754480 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-11-01 14:19:23.754491 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-11-01 14:19:23.754501 | orchestrator | 2025-11-01 14:19:23.754532 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-11-01 14:19:23.754543 | orchestrator | Saturday 01 November 2025 14:17:56 +0000 (0:00:01.774) 0:00:46.934 ***** 2025-11-01 14:19:23.754554 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:19:23.754564 | orchestrator | 2025-11-01 14:19:23.754607 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-11-01 14:19:23.754620 | orchestrator | Saturday 01 November 2025 14:17:57 +0000 (0:00:00.241) 0:00:47.175 ***** 2025-11-01 14:19:23.754630 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:19:23.754641 | 
orchestrator | skipping: [testbed-node-1] 2025-11-01 14:19:23.754652 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:19:23.754662 | orchestrator | 2025-11-01 14:19:23.754673 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-11-01 14:19:23.754684 | orchestrator | Saturday 01 November 2025 14:17:57 +0000 (0:00:00.495) 0:00:47.670 ***** 2025-11-01 14:19:23.754694 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:19:23.754705 | orchestrator | 2025-11-01 14:19:23.754716 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-11-01 14:19:23.754727 | orchestrator | Saturday 01 November 2025 14:17:58 +0000 (0:00:00.765) 0:00:48.436 ***** 2025-11-01 14:19:23.754738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.754773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.754786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.754798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.754810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.754828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.754840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.754864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.754877 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.754888 | orchestrator | 2025-11-01 14:19:23.754899 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-11-01 14:19:23.754910 | orchestrator | Saturday 01 November 2025 14:18:01 +0000 (0:00:03.670) 0:00:52.106 ***** 2025-11-01 14:19:23.754921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 14:19:23.754932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.754951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.754963 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:19:23.754986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 14:19:23.754998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.755010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.755021 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:19:23.755032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 14:19:23.755050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.755062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.755073 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:19:23.755084 | orchestrator | 2025-11-01 14:19:23.755095 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-11-01 14:19:23.755105 | orchestrator | Saturday 01 November 2025 14:18:03 +0000 (0:00:01.030) 0:00:53.137 ***** 2025-11-01 14:19:23.755136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 14:19:23.755149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.755160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.755178 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:19:23.755189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 14:19:23.755200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.755223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 14:19:23.755235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.755247 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:19:23.755258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.755276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.755287 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:19:23.755298 | orchestrator | 2025-11-01 14:19:23.755309 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-11-01 14:19:23.755320 | orchestrator | Saturday 01 November 2025 14:18:04 +0000 (0:00:01.084) 0:00:54.221 ***** 2025-11-01 14:19:23.755331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.755353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.755365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.755377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.755397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.755408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.755419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.755437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.755454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.755465 | orchestrator | 2025-11-01 14:19:23.755476 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-11-01 14:19:23.755487 | orchestrator | Saturday 01 November 2025 14:18:08 +0000 (0:00:04.158) 0:00:58.380 ***** 2025-11-01 14:19:23.755498 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:19:23.755539 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:19:23.755551 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:19:23.755561 | orchestrator | 2025-11-01 14:19:23.755572 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-11-01 14:19:23.755590 | orchestrator | Saturday 01 November 2025 14:18:11 +0000 (0:00:02.759) 0:01:01.140 ***** 2025-11-01 14:19:23.755601 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 14:19:23.755611 | orchestrator | 2025-11-01 14:19:23.755622 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-11-01 14:19:23.755633 | orchestrator | Saturday 01 November 2025 14:18:12 +0000 (0:00:01.344) 0:01:02.485 ***** 2025-11-01 14:19:23.755643 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:19:23.755654 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:19:23.755665 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:19:23.755675 | orchestrator | 2025-11-01 14:19:23.755686 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-11-01 14:19:23.755697 | orchestrator | Saturday 01 November 2025 14:18:13 +0000 (0:00:00.705) 0:01:03.190 ***** 2025-11-01 14:19:23.755708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.755720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.755745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.755757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.755777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.755788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.755800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.755811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.755822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.755833 | orchestrator | 2025-11-01 14:19:23.755845 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-11-01 14:19:23.755861 | orchestrator | Saturday 01 November 2025 14:18:25 +0000 (0:00:12.360) 0:01:15.551 ***** 2025-11-01 14:19:23.755877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 
14:19:23.755896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.755908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.755919 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:19:23.755930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 14:19:23.755941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.755963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.755982 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:19:23.755993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 14:19:23.756005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.756016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:19:23.756027 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:19:23.756038 | orchestrator | 2025-11-01 14:19:23.756049 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-11-01 14:19:23.756060 | orchestrator | Saturday 01 November 2025 14:18:27 +0000 (0:00:01.878) 0:01:17.430 ***** 2025-11-01 14:19:23.756071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.756095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.756118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.756129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.756141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 14:19:23.756152 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.756163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.756194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.756206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:19:23.756217 | orchestrator | 2025-11-01 14:19:23.756228 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-11-01 14:19:23.756239 | orchestrator | Saturday 01 November 2025 14:18:31 +0000 (0:00:04.489) 0:01:21.919 ***** 2025-11-01 14:19:23.756250 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:19:23.756261 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:19:23.756271 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:19:23.756282 | orchestrator | 2025-11-01 14:19:23.756293 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-11-01 14:19:23.756304 | orchestrator | Saturday 01 November 2025 14:18:32 +0000 (0:00:00.450) 0:01:22.370 ***** 2025-11-01 14:19:23.756314 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:19:23.756325 | orchestrator | 2025-11-01 14:19:23.756336 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-11-01 14:19:23.756347 | orchestrator | Saturday 01 November 2025 
14:18:34 +0000 (0:00:02.072) 0:01:24.442 ***** 2025-11-01 14:19:23.756358 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:19:23.756368 | orchestrator | 2025-11-01 14:19:23.756379 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-11-01 14:19:23.756390 | orchestrator | Saturday 01 November 2025 14:18:36 +0000 (0:00:02.460) 0:01:26.903 ***** 2025-11-01 14:19:23.756400 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:19:23.756411 | orchestrator | 2025-11-01 14:19:23.756422 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-11-01 14:19:23.756432 | orchestrator | Saturday 01 November 2025 14:18:50 +0000 (0:00:13.450) 0:01:40.354 ***** 2025-11-01 14:19:23.756443 | orchestrator | 2025-11-01 14:19:23.756454 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-11-01 14:19:23.756464 | orchestrator | Saturday 01 November 2025 14:18:50 +0000 (0:00:00.156) 0:01:40.510 ***** 2025-11-01 14:19:23.756475 | orchestrator | 2025-11-01 14:19:23.756486 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-11-01 14:19:23.756497 | orchestrator | Saturday 01 November 2025 14:18:50 +0000 (0:00:00.215) 0:01:40.726 ***** 2025-11-01 14:19:23.756563 | orchestrator | 2025-11-01 14:19:23.756576 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-11-01 14:19:23.756587 | orchestrator | Saturday 01 November 2025 14:18:50 +0000 (0:00:00.169) 0:01:40.896 ***** 2025-11-01 14:19:23.756597 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:19:23.756608 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:19:23.756619 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:19:23.756629 | orchestrator | 2025-11-01 14:19:23.756640 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-11-01 14:19:23.756658 | orchestrator | Saturday 01 November 2025 14:19:01 +0000 (0:00:10.809) 0:01:51.706 ***** 2025-11-01 14:19:23.756669 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:19:23.756679 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:19:23.756690 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:19:23.756701 | orchestrator | 2025-11-01 14:19:23.756711 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-11-01 14:19:23.756722 | orchestrator | Saturday 01 November 2025 14:19:10 +0000 (0:00:09.101) 0:02:00.807 ***** 2025-11-01 14:19:23.756733 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:19:23.756743 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:19:23.756754 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:19:23.756764 | orchestrator | 2025-11-01 14:19:23.756775 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:19:23.756787 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-01 14:19:23.756799 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 14:19:23.756810 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 14:19:23.756821 | orchestrator | 2025-11-01 14:19:23.756832 | orchestrator | 2025-11-01 14:19:23.756843 | orchestrator | TASKS RECAP 
******************************************************************** 2025-11-01 14:19:23.756854 | orchestrator | Saturday 01 November 2025 14:19:20 +0000 (0:00:10.263) 0:02:11.070 ***** 2025-11-01 14:19:23.756865 | orchestrator | =============================================================================== 2025-11-01 14:19:23.756876 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.48s 2025-11-01 14:19:23.756892 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.45s 2025-11-01 14:19:23.756909 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 12.36s 2025-11-01 14:19:23.756921 | orchestrator | barbican : Restart barbican-api container ------------------------------ 10.81s 2025-11-01 14:19:23.756931 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.26s 2025-11-01 14:19:23.756940 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.10s 2025-11-01 14:19:23.756950 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.27s 2025-11-01 14:19:23.756959 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.61s 2025-11-01 14:19:23.756969 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.49s 2025-11-01 14:19:23.756978 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.21s 2025-11-01 14:19:23.756987 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.16s 2025-11-01 14:19:23.756997 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.06s 2025-11-01 14:19:23.757006 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 4.05s 2025-11-01 14:19:23.757016 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.67s 2025-11-01 14:19:23.757025 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.89s 2025-11-01 14:19:23.757035 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.76s 2025-11-01 14:19:23.757044 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.46s 2025-11-01 14:19:23.757053 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.07s 2025-11-01 14:19:23.757063 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.88s 2025-11-01 14:19:23.757073 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.77s 2025-11-01 14:19:23.757828 | orchestrator | 2025-11-01 14:19:23 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:19:23.759650 | orchestrator | 2025-11-01 14:19:23 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:19:23.760333 | orchestrator | 2025-11-01 14:19:23 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:19:23.760357 | orchestrator | 2025-11-01 14:19:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:19:26.796559 | orchestrator | 2025-11-01 14:19:26 | INFO  | Task e70d5da2-52c9-4eb2-acfd-801329b50649 is in state STARTED 2025-11-01 14:19:26.797217 | orchestrator | 2025-11-01 14:19:26 | INFO  | Task 
6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:19:26.798091 | orchestrator | 2025-11-01 14:19:26 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state STARTED 2025-11-01 14:19:26.798985 | orchestrator | 2025-11-01 14:19:26 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:19:26.799004 | orchestrator | 2025-11-01 14:19:26 | INFO  | Wait 1 second(s) until the next check
[repetitive polling output omitted: from 14:19:29 to 14:20:37 the orchestrator re-checked roughly every 3 seconds and tasks e70d5da2-52c9-4eb2-acfd-801329b50649, 6ed57eb0-6997-460a-8baf-57a5919c05ba, 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 and 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 all remained in state STARTED]
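The STARTED/SUCCESS lines above are emitted by the deployment wrapper on the orchestrator while it polls the state of the background tasks it has queued. The sketch below illustrates that polling pattern only; the helper names (wait_for_tasks, get_state) are hypothetical, and this is not the actual osism client code. It assumes the state lookup is a thin wrapper around something like Celery's AsyncResult(task_id).state.

```python
# Minimal sketch of the polling pattern visible in this log.
# Hypothetical helper names; not the actual osism/Celery client code.
import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s", level=logging.INFO)
log = logging.getLogger("poll")


def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll each task's state until every task has left STARTED.

    get_state: callable mapping a task id to a state string, assumed to be
    a thin wrapper around something like Celery's AsyncResult(task_id).state.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            log.info("Task %s is in state %s", task_id, state)
            if state in ("SUCCESS", "FAILURE"):
                # Finished tasks drop out of the polling set.
                pending.discard(task_id)
        if pending:
            log.info("Wait %d second(s) until the next check", int(interval))
            time.sleep(interval)
```

In this run, 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is the first of the four tasks to reach SUCCESS (at 14:20:40), after which its buffered Ansible output, the designate play that follows, is printed. The roughly 3-second gap between checks, despite the 1-second wait message, presumably includes the time spent querying each task's state.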
2025-11-01 14:20:40.149052 | orchestrator | 2025-11-01 14:20:40 | INFO  | Task e70d5da2-52c9-4eb2-acfd-801329b50649 is in state STARTED 2025-11-01 14:20:40.150301 | orchestrator | 2025-11-01 14:20:40 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:20:40.152291 | orchestrator | 2025-11-01 14:20:40 | INFO  | Task 21d7f05b-733d-47db-b3b4-ef8fb4ed01b7 is in state SUCCESS 2025-11-01 14:20:40.154237 | orchestrator | 2025-11-01 14:20:40.154271 | orchestrator | 2025-11-01 14:20:40.154283 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:20:40.154295 | orchestrator | 2025-11-01 14:20:40.154307 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:20:40.154319 | orchestrator | Saturday 01 November 2025 14:17:10 +0000 (0:00:00.575) 0:00:00.575 ***** 2025-11-01 14:20:40.154331 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:20:40.154344 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:20:40.154355 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:20:40.154366 | orchestrator | 2025-11-01 14:20:40.154392 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:20:40.154404 | orchestrator | Saturday 01 November 2025 14:17:10 +0000 (0:00:00.511) 0:00:01.086 ***** 2025-11-01 14:20:40.154451 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-11-01 14:20:40.154463 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-11-01 14:20:40.154474 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-11-01 14:20:40.154484 | orchestrator | 2025-11-01 14:20:40.154495 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-11-01 14:20:40.154560 | orchestrator | 2025-11-01 14:20:40.154573 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-01 14:20:40.154584 | orchestrator | Saturday 01 November 2025 14:17:11 +0000 (0:00:00.653) 0:00:01.740 ***** 2025-11-01 14:20:40.154595 | orchestrator | included: 
/ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:20:40.154606 | orchestrator | 2025-11-01 14:20:40.154617 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-11-01 14:20:40.154628 | orchestrator | Saturday 01 November 2025 14:17:12 +0000 (0:00:00.697) 0:00:02.438 ***** 2025-11-01 14:20:40.154639 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-11-01 14:20:40.154649 | orchestrator | 2025-11-01 14:20:40.154660 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-11-01 14:20:40.154694 | orchestrator | Saturday 01 November 2025 14:17:16 +0000 (0:00:03.963) 0:00:06.401 ***** 2025-11-01 14:20:40.154824 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-11-01 14:20:40.154838 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-11-01 14:20:40.154851 | orchestrator | 2025-11-01 14:20:40.154863 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-11-01 14:20:40.154875 | orchestrator | Saturday 01 November 2025 14:17:23 +0000 (0:00:07.753) 0:00:14.154 ***** 2025-11-01 14:20:40.154888 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating projects (5 retries left). 2025-11-01 14:20:40.154900 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-01 14:20:40.154912 | orchestrator | 2025-11-01 14:20:40.154924 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-11-01 14:20:40.154936 | orchestrator | Saturday 01 November 2025 14:17:40 +0000 (0:00:16.420) 0:00:30.575 ***** 2025-11-01 14:20:40.154948 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 14:20:40.154960 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-11-01 14:20:40.154972 | orchestrator | 2025-11-01 14:20:40.154984 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-11-01 14:20:40.154996 | orchestrator | Saturday 01 November 2025 14:17:44 +0000 (0:00:03.931) 0:00:34.506 ***** 2025-11-01 14:20:40.155009 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 14:20:40.155022 | orchestrator | 2025-11-01 14:20:40.155034 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-11-01 14:20:40.155046 | orchestrator | Saturday 01 November 2025 14:17:47 +0000 (0:00:03.414) 0:00:37.921 ***** 2025-11-01 14:20:40.155057 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-11-01 14:20:40.155069 | orchestrator | 2025-11-01 14:20:40.155081 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-11-01 14:20:40.155094 | orchestrator | Saturday 01 November 2025 14:17:51 +0000 (0:00:04.220) 0:00:42.142 ***** 2025-11-01 14:20:40.155110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.155150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.155164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.155186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155467 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155490 | orchestrator | 2025-11-01 14:20:40.155501 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-11-01 14:20:40.155532 | orchestrator | Saturday 01 November 2025 14:17:56 +0000 (0:00:04.369) 0:00:46.512 ***** 2025-11-01 14:20:40.155544 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:20:40.155554 | orchestrator | 2025-11-01 14:20:40.155565 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-11-01 14:20:40.155576 | orchestrator | Saturday 01 November 2025 14:17:56 +0000 (0:00:00.271) 0:00:46.783 ***** 2025-11-01 14:20:40.155586 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:20:40.155597 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:20:40.155608 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:20:40.155618 | orchestrator | 2025-11-01 14:20:40.155629 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-01 14:20:40.155639 | orchestrator | Saturday 01 November 2025 14:17:56 +0000 (0:00:00.521) 0:00:47.305 ***** 2025-11-01 14:20:40.155650 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:20:40.155661 | orchestrator | 2025-11-01 14:20:40.155671 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-11-01 14:20:40.155682 | orchestrator | Saturday 01 November 2025 14:17:57 +0000 (0:00:00.721) 0:00:48.026 ***** 2025-11-01 14:20:40.155791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.155824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.155837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.155848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155958 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.155969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.156004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.156021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.156033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.156044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.156055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.156066 | orchestrator | 2025-11-01 14:20:40.156077 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-11-01 14:20:40.156088 | orchestrator | Saturday 01 November 2025 14:18:04 +0000 (0:00:06.569) 0:00:54.595 ***** 2025-11-01 14:20:40.156128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.156154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:20:40.156171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.156183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.156194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:20:40.156206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.156217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.156234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.156903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.156927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.156938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.156950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.156961 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:20:40.156972 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:20:40.156984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.157004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:20:40.157025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157076 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:20:40.157086 | orchestrator | 2025-11-01 14:20:40.157097 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-11-01 14:20:40.157108 | orchestrator | Saturday 01 November 2025 14:18:05 +0000 (0:00:01.540) 0:00:56.135 ***** 2025-11-01 14:20:40.157119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.157136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:20:40.157154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157205 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:20:40.157216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.157234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.157257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:20:40.157269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:20:40.157281 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.157387 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:20:40.157398 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:20:40.157409 | orchestrator | 2025-11-01 14:20:40.157425 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-11-01 14:20:40.157437 | orchestrator | Saturday 01 November 2025 14:18:07 +0000 (0:00:01.265) 0:00:57.401 ***** 2025-11-01 14:20:40.157448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.157460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.157486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.157501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157786 | orchestrator | 2025-11-01 
14:20:40.157798 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-11-01 14:20:40.157832 | orchestrator | Saturday 01 November 2025 14:18:14 +0000 (0:00:07.118) 0:01:04.519 ***** 2025-11-01 14:20:40.157845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.157859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.157870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.157893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.157992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158103 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158160 | orchestrator | 2025-11-01 14:20:40.158172 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-11-01 14:20:40.158183 | orchestrator | Saturday 01 November 2025 14:18:39 +0000 (0:00:25.190) 0:01:29.710 ***** 2025-11-01 14:20:40.158194 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-11-01 14:20:40.158205 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-11-01 14:20:40.158216 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-11-01 14:20:40.158227 | orchestrator | 2025-11-01 14:20:40.158237 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-11-01 14:20:40.158248 | orchestrator | Saturday 01 November 2025 14:18:45 +0000 (0:00:06.302) 0:01:36.013 ***** 2025-11-01 14:20:40.158259 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-11-01 14:20:40.158269 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-11-01 14:20:40.158280 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-11-01 14:20:40.158291 | orchestrator | 2025-11-01 14:20:40.158301 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-11-01 14:20:40.158312 | orchestrator | Saturday 01 November 2025 14:18:49 +0000 (0:00:04.241) 0:01:40.254 ***** 2025-11-01 14:20:40.158323 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.158335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.158356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.158377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}})  2025-11-01 14:20:40.158478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158608 | orchestrator | 2025-11-01 14:20:40.158619 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-11-01 14:20:40.158630 | orchestrator | Saturday 01 November 2025 14:18:54 +0000 (0:00:04.336) 0:01:44.591 ***** 2025-11-01 14:20:40.158641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.158653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.158664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.158693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.158849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.158900 | orchestrator | 2025-11-01 14:20:40.158911 | 
orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-01 14:20:40.158922 | orchestrator | Saturday 01 November 2025 14:18:57 +0000 (0:00:03.672) 0:01:48.264 ***** 2025-11-01 14:20:40.158933 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:20:40.158944 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:20:40.158955 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:20:40.158966 | orchestrator | 2025-11-01 14:20:40.158976 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-11-01 14:20:40.158987 | orchestrator | Saturday 01 November 2025 14:18:58 +0000 (0:00:00.950) 0:01:49.214 ***** 2025-11-01 14:20:40.158999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.159010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:20:40.159021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.159039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 
14:20:40.159062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.159074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.159085 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:20:40.159097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.159108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:20:40.159120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 
14:20:40.159137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.159154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.159170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.159182 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:20:40.159193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 14:20:40.159205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 14:20:40.159216 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.159233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.159250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.159267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:20:40.159278 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:20:40.159289 | orchestrator | 2025-11-01 14:20:40.159300 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-11-01 14:20:40.159311 | orchestrator | Saturday 01 November 2025 14:18:59 +0000 (0:00:01.009) 0:01:50.223 ***** 2025-11-01 14:20:40.159322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.159334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.159354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 14:20:40.159401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 
5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:20:40.159633 | orchestrator | 2025-11-01 14:20:40.159644 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-01 14:20:40.159660 | orchestrator | Saturday 01 November 2025 14:19:06 +0000 (0:00:06.532) 0:01:56.755 ***** 2025-11-01 14:20:40.159672 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:20:40.159683 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:20:40.159693 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:20:40.159704 | orchestrator | 2025-11-01 14:20:40.159715 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-11-01 14:20:40.159725 | orchestrator | Saturday 01 November 2025 14:19:06 +0000 (0:00:00.449) 0:01:57.205 ***** 2025-11-01 14:20:40.159736 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-11-01 14:20:40.159746 | orchestrator | 2025-11-01 14:20:40.159755 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-11-01 14:20:40.159765 | orchestrator | Saturday 01 November 2025 14:19:09 +0000 (0:00:02.271) 0:01:59.476 ***** 2025-11-01 14:20:40.159774 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-01 14:20:40.159784 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-11-01 14:20:40.159793 | orchestrator | 2025-11-01 14:20:40.159803 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-11-01 14:20:40.159812 | orchestrator | Saturday 01 November 2025 14:19:11 +0000 (0:00:02.750) 0:02:02.227 ***** 2025-11-01 14:20:40.159828 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:20:40.159837 | orchestrator | 2025-11-01 14:20:40.159847 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-11-01 14:20:40.159856 | orchestrator | Saturday 01 November 2025 14:19:29 +0000 (0:00:17.555) 0:02:19.782 ***** 2025-11-01 14:20:40.159866 | orchestrator | 2025-11-01 14:20:40.159875 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-11-01 14:20:40.159885 | orchestrator | Saturday 01 November 2025 14:19:29 +0000 (0:00:00.338) 0:02:20.121 ***** 2025-11-01 14:20:40.159894 | orchestrator | 2025-11-01 14:20:40.159904 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-11-01 14:20:40.159913 | orchestrator | Saturday 01 November 2025 14:19:29 +0000 (0:00:00.071) 0:02:20.193 ***** 2025-11-01 14:20:40.159923 | orchestrator | 2025-11-01 14:20:40.159932 | orchestrator | 
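
Every loop item printed by the designate tasks above has the same shape: a container name, a Kolla image tag, a set of bind-mounted configuration volumes, and a healthcheck command (healthcheck_curl against port 9001 for designate-api, healthcheck_listen for the bind9 backend, healthcheck_port on 5672 for the services that only talk to RabbitMQ). The snippet below is a minimal, hand-reconstructed sketch of that structure as a Python dict, using only values that appear in this log (the empty placeholder volume entries are omitted); it is illustrative and is not copied from the kolla-ansible role itself.

# Hand-reconstructed sketch of one entry from the designate service map,
# based solely on the loop items visible in the log above.
designate_services = {
    "designate-backend-bind9": {
        "container_name": "designate_backend_bind9",
        "group": "designate-backend-bind9",
        "enabled": True,
        "image": "registry.osism.tech/kolla/designate-backend-bind9:2024.2",
        "volumes": [
            "/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "designate_backend_bind9:/var/lib/named/",
        ],
        "dimensions": {},
        # The healthcheck is what the container engine uses to mark the
        # container healthy after the "Restart ..." handlers below recreate it:
        # bind9 counts as healthy once the named process listens on port 53.
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "timeout": "30",
            "test": ["CMD-SHELL", "healthcheck_listen named 53"],
        },
    },
}

if __name__ == "__main__":
    # Quick sanity check: print the healthcheck command of each sketched service.
    for name, svc in designate_services.items():
        print(name, "->", " ".join(svc["healthcheck"]["test"][1:]))

Of the designate components, only designate-api additionally carries an haproxy section in these items (internal and external frontends on port 9001 behind api.testbed.osism.xyz), so it is the only one exposed through the load balancer.
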
RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-11-01 14:20:40.159942 | orchestrator | Saturday 01 November 2025 14:19:29 +0000 (0:00:00.078) 0:02:20.271 ***** 2025-11-01 14:20:40.159951 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:20:40.159961 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:20:40.159971 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:20:40.159980 | orchestrator | 2025-11-01 14:20:40.159989 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-11-01 14:20:40.159999 | orchestrator | Saturday 01 November 2025 14:19:40 +0000 (0:00:11.002) 0:02:31.274 ***** 2025-11-01 14:20:40.160008 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:20:40.160018 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:20:40.160027 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:20:40.160037 | orchestrator | 2025-11-01 14:20:40.160046 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-11-01 14:20:40.160056 | orchestrator | Saturday 01 November 2025 14:19:54 +0000 (0:00:13.271) 0:02:44.546 ***** 2025-11-01 14:20:40.160065 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:20:40.160074 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:20:40.160084 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:20:40.160093 | orchestrator | 2025-11-01 14:20:40.160103 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-11-01 14:20:40.160112 | orchestrator | Saturday 01 November 2025 14:20:05 +0000 (0:00:11.057) 0:02:55.603 ***** 2025-11-01 14:20:40.160121 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:20:40.160131 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:20:40.160140 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:20:40.160150 | orchestrator | 2025-11-01 14:20:40.160159 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-11-01 14:20:40.160169 | orchestrator | Saturday 01 November 2025 14:20:16 +0000 (0:00:11.010) 0:03:06.613 ***** 2025-11-01 14:20:40.160178 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:20:40.160188 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:20:40.160197 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:20:40.160206 | orchestrator | 2025-11-01 14:20:40.160216 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-11-01 14:20:40.160225 | orchestrator | Saturday 01 November 2025 14:20:22 +0000 (0:00:06.431) 0:03:13.045 ***** 2025-11-01 14:20:40.160235 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:20:40.160244 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:20:40.160254 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:20:40.160263 | orchestrator | 2025-11-01 14:20:40.160272 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-11-01 14:20:40.160282 | orchestrator | Saturday 01 November 2025 14:20:31 +0000 (0:00:09.103) 0:03:22.149 ***** 2025-11-01 14:20:40.160292 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:20:40.160301 | orchestrator | 2025-11-01 14:20:40.160311 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:20:40.160320 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-01 
14:20:40.160336 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 14:20:40.160351 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 14:20:40.160361 | orchestrator | 2025-11-01 14:20:40.160371 | orchestrator | 2025-11-01 14:20:40.160380 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:20:40.160390 | orchestrator | Saturday 01 November 2025 14:20:38 +0000 (0:00:07.215) 0:03:29.364 ***** 2025-11-01 14:20:40.160399 | orchestrator | =============================================================================== 2025-11-01 14:20:40.160409 | orchestrator | designate : Copying over designate.conf -------------------------------- 25.19s 2025-11-01 14:20:40.160423 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.56s 2025-11-01 14:20:40.160433 | orchestrator | service-ks-register : designate | Creating projects -------------------- 16.42s 2025-11-01 14:20:40.160443 | orchestrator | designate : Restart designate-api container ---------------------------- 13.27s 2025-11-01 14:20:40.160452 | orchestrator | designate : Restart designate-central container ------------------------ 11.06s 2025-11-01 14:20:40.160462 | orchestrator | designate : Restart designate-producer container ----------------------- 11.01s 2025-11-01 14:20:40.160471 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 11.00s 2025-11-01 14:20:40.160481 | orchestrator | designate : Restart designate-worker container -------------------------- 9.10s 2025-11-01 14:20:40.160490 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.75s 2025-11-01 14:20:40.160499 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.22s 2025-11-01 14:20:40.160522 | orchestrator | designate : Copying over config.json files for services ----------------- 7.12s 2025-11-01 14:20:40.160532 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.57s 2025-11-01 14:20:40.160541 | orchestrator | designate : Check designate containers ---------------------------------- 6.53s 2025-11-01 14:20:40.160551 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.43s 2025-11-01 14:20:40.160560 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.30s 2025-11-01 14:20:40.160570 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.37s 2025-11-01 14:20:40.160579 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.34s 2025-11-01 14:20:40.160589 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.24s 2025-11-01 14:20:40.160598 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.22s 2025-11-01 14:20:40.160607 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.96s
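
At this point the designate play has finished and the job output returns to the OSISM manager, which is polling the state of the deployment tasks it launched; the INFO lines that resume below show four task IDs being checked roughly once per second until each reaches SUCCESS (the STARTED/SUCCESS values look like Celery task states). As a rough sketch of that polling pattern only, assuming a hypothetical get_task_state(task_id) helper rather than OSISM's actual client, the loop amounts to:

import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    # Poll every task until it reports SUCCESS, sleeping `interval` seconds
    # between rounds; failure handling is omitted to keep the sketch short.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # hypothetical helper
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

Once a task reaches SUCCESS its buffered Ansible output is printed in one piece, which is presumably why the placement play below carries internal timestamps starting at 14:19:27 even though the surrounding console lines are stamped 14:20:49.
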
2025-11-01 14:20:40.160617 | orchestrator | 2025-11-01 14:20:40 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:20:40.160627 | orchestrator | 2025-11-01 14:20:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:20:43.203831 | orchestrator | 2025-11-01 14:20:43 | INFO  | Task e70d5da2-52c9-4eb2-acfd-801329b50649 is in state STARTED 2025-11-01 14:20:43.204469 | orchestrator | 2025-11-01 14:20:43 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:20:43.205480 | orchestrator | 2025-11-01 14:20:43 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:20:43.207204 | orchestrator | 2025-11-01 14:20:43 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:20:43.207423 | orchestrator | 2025-11-01 14:20:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:20:46.246966 | orchestrator | 2025-11-01 14:20:46 | INFO  | Task e70d5da2-52c9-4eb2-acfd-801329b50649 is in state STARTED 2025-11-01 14:20:46.247077 | orchestrator | 2025-11-01 14:20:46 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:20:46.247091 | orchestrator | 2025-11-01 14:20:46 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:20:46.247102 | orchestrator | 2025-11-01 14:20:46 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:20:46.247113 | orchestrator | 2025-11-01 14:20:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:20:49.305482 | orchestrator | 2025-11-01 14:20:49 | INFO  | Task e70d5da2-52c9-4eb2-acfd-801329b50649 is in state SUCCESS 2025-11-01 14:20:49.307912 | orchestrator | 2025-11-01 14:20:49.307951 | orchestrator | 2025-11-01 14:20:49.307963 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:20:49.307974 | orchestrator | 2025-11-01 14:20:49.307984 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:20:49.307994 | orchestrator | Saturday 01 November 2025 14:19:27 +0000 (0:00:00.326) 0:00:00.326 ***** 2025-11-01 14:20:49.308004 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:20:49.308015 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:20:49.308025 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:20:49.308034 | orchestrator | 2025-11-01 14:20:49.308044 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:20:49.308054 | orchestrator | Saturday 01 November 2025 14:19:27 +0000 (0:00:00.357) 0:00:00.683 ***** 2025-11-01 14:20:49.308064 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-11-01 14:20:49.308074 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-11-01 14:20:49.308083 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-11-01 14:20:49.308093 | orchestrator | 2025-11-01 14:20:49.308102 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-11-01 14:20:49.308112 | orchestrator | 2025-11-01 14:20:49.308121 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-11-01 14:20:49.308131 | orchestrator | Saturday 01 November 2025 14:19:28 +0000 (0:00:00.921) 0:00:01.605 ***** 2025-11-01 14:20:49.308140 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:20:49.308151 | orchestrator | 2025-11-01 14:20:49.308176 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-11-01 14:20:49.308186 | orchestrator | Saturday 01 November 2025 14:19:30 +0000 (0:00:01.304) 0:00:02.910 ***** 2025-11-01 14:20:49.308196 | orchestrator | changed: 
[testbed-node-0] => (item=placement (placement)) 2025-11-01 14:20:49.308205 | orchestrator | 2025-11-01 14:20:49.308215 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-11-01 14:20:49.308225 | orchestrator | Saturday 01 November 2025 14:19:34 +0000 (0:00:04.008) 0:00:06.919 ***** 2025-11-01 14:20:49.308235 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-11-01 14:20:49.308245 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-11-01 14:20:49.308254 | orchestrator | 2025-11-01 14:20:49.308264 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-11-01 14:20:49.308273 | orchestrator | Saturday 01 November 2025 14:19:41 +0000 (0:00:07.116) 0:00:14.035 ***** 2025-11-01 14:20:49.308283 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-01 14:20:49.308292 | orchestrator | 2025-11-01 14:20:49.308302 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-11-01 14:20:49.308311 | orchestrator | Saturday 01 November 2025 14:19:45 +0000 (0:00:03.725) 0:00:17.761 ***** 2025-11-01 14:20:49.308321 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 14:20:49.308330 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-11-01 14:20:49.308358 | orchestrator | 2025-11-01 14:20:49.308368 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-11-01 14:20:49.308377 | orchestrator | Saturday 01 November 2025 14:19:49 +0000 (0:00:04.432) 0:00:22.193 ***** 2025-11-01 14:20:49.308387 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 14:20:49.308396 | orchestrator | 2025-11-01 14:20:49.308405 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-11-01 14:20:49.308415 | orchestrator | Saturday 01 November 2025 14:19:53 +0000 (0:00:03.616) 0:00:25.810 ***** 2025-11-01 14:20:49.308425 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-11-01 14:20:49.308434 | orchestrator | 2025-11-01 14:20:49.308444 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-11-01 14:20:49.308454 | orchestrator | Saturday 01 November 2025 14:19:57 +0000 (0:00:04.210) 0:00:30.020 ***** 2025-11-01 14:20:49.308470 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:20:49.308547 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:20:49.308574 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:20:49.308594 | orchestrator | 2025-11-01 14:20:49.308620 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-11-01 14:20:49.308643 | orchestrator | Saturday 01 November 2025 14:19:57 +0000 (0:00:00.334) 0:00:30.355 ***** 2025-11-01 14:20:49.308665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.308711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.308745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.308777 | orchestrator | 2025-11-01 14:20:49.308790 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-11-01 14:20:49.308803 | orchestrator | Saturday 01 November 2025 14:19:58 +0000 (0:00:00.849) 0:00:31.205 ***** 2025-11-01 14:20:49.308814 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:20:49.308826 | orchestrator | 2025-11-01 14:20:49.308838 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-11-01 14:20:49.308851 | orchestrator | Saturday 01 November 2025 14:19:58 +0000 (0:00:00.126) 0:00:31.332 ***** 2025-11-01 14:20:49.308862 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:20:49.308872 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:20:49.308883 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:20:49.308894 | orchestrator | 2025-11-01 14:20:49.308904 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-11-01 14:20:49.308915 | orchestrator | Saturday 01 November 2025 14:19:59 +0000 (0:00:00.500) 0:00:31.832 ***** 2025-11-01 14:20:49.308926 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-11-01 14:20:49.308936 | orchestrator | 2025-11-01 14:20:49.308947 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-11-01 14:20:49.308958 | orchestrator | Saturday 01 November 2025 14:19:59 +0000 (0:00:00.555) 0:00:32.387 ***** 2025-11-01 14:20:49.308969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.308990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.309007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.309026 | orchestrator | 2025-11-01 14:20:49.309037 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-11-01 14:20:49.309048 | orchestrator | Saturday 01 November 2025 14:20:01 +0000 (0:00:01.510) 0:00:33.897 ***** 2025-11-01 14:20:49.309059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 14:20:49.309070 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:20:49.309082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 14:20:49.309093 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:20:49.309111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 14:20:49.309122 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:20:49.309133 | orchestrator | 2025-11-01 14:20:49.309144 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-11-01 14:20:49.309155 | orchestrator | Saturday 01 November 2025 14:20:02 +0000 (0:00:00.955) 0:00:34.852 ***** 2025-11-01 14:20:49.309171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 14:20:49.309189 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:20:49.309200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 14:20:49.309211 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:20:49.309222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 14:20:49.309233 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:20:49.309244 | orchestrator | 2025-11-01 14:20:49.309255 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-11-01 14:20:49.309265 | orchestrator | Saturday 01 November 2025 14:20:02 +0000 (0:00:00.713) 0:00:35.566 ***** 2025-11-01 14:20:49.309281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.309293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.309316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.309327 | orchestrator | 2025-11-01 14:20:49.309338 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-11-01 14:20:49.309349 | orchestrator | Saturday 01 November 2025 14:20:04 +0000 (0:00:01.425) 0:00:36.991 ***** 2025-11-01 14:20:49.309360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.309372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.309391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.309409 | orchestrator | 2025-11-01 14:20:49.309420 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-11-01 14:20:49.309431 | orchestrator | Saturday 01 November 2025 14:20:07 +0000 (0:00:03.097) 0:00:40.089 ***** 2025-11-01 14:20:49.309442 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-11-01 14:20:49.309453 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-11-01 14:20:49.309464 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-11-01 14:20:49.309475 | orchestrator | 2025-11-01 14:20:49.309490 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-11-01 14:20:49.309502 | orchestrator | Saturday 01 November 2025 14:20:08 +0000 (0:00:01.484) 0:00:41.573 ***** 2025-11-01 14:20:49.309545 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:20:49.309556 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:20:49.309567 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:20:49.309578 | orchestrator | 2025-11-01 14:20:49.309588 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-11-01 14:20:49.309599 | orchestrator | Saturday 01 November 2025 14:20:10 +0000 (0:00:01.444) 0:00:43.018 ***** 2025-11-01 14:20:49.309610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 14:20:49.309621 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:20:49.309633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 14:20:49.309644 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:20:49.309663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 14:20:49.309682 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:20:49.309693 | orchestrator | 2025-11-01 14:20:49.309704 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-11-01 14:20:49.309714 | orchestrator | Saturday 01 November 2025 14:20:10 +0000 (0:00:00.545) 0:00:43.564 ***** 2025-11-01 14:20:49.309736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.309749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.309760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 14:20:49.309771 | orchestrator | 2025-11-01 14:20:49.309782 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-11-01 14:20:49.309793 | orchestrator | Saturday 01 November 2025 14:20:11 +0000 (0:00:01.114) 0:00:44.678 ***** 2025-11-01 14:20:49.309804 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:20:49.309815 | orchestrator | 2025-11-01 14:20:49.309831 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-11-01 14:20:49.309890 | orchestrator | Saturday 01 November 2025 14:20:14 +0000 (0:00:02.750) 0:00:47.429 ***** 2025-11-01 14:20:49.309902 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:20:49.309913 | orchestrator | 2025-11-01 14:20:49.309929 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-11-01 14:20:49.309947 | orchestrator | Saturday 01 November 2025 14:20:17 +0000 (0:00:02.520) 0:00:49.950 ***** 2025-11-01 14:20:49.309965 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:20:49.309982 | orchestrator | 2025-11-01 14:20:49.310009 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-11-01 14:20:49.310109 | orchestrator | Saturday 01 November 2025 14:20:34 +0000 (0:00:17.366) 0:01:07.316 ***** 
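
The placement-api container definition rendered in the tasks above configures a Docker healthcheck of the form ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'] with interval 30, retries 3, start_period 5 and timeout 30. healthcheck_curl is a Kolla helper whose implementation is not part of this log; as a minimal sketch, assuming the check only needs the placement API port to answer an HTTP request, it behaves roughly like:

#!/usr/bin/env python3
"""Rough sketch of what the configured placement-api healthcheck amounts to.
Assumption: the Kolla helper 'healthcheck_curl' (not shown in this log)
succeeds as soon as the API port answers an HTTP request; the URL below is
the node-internal address taken from the container definition above."""
import sys
import urllib.error
import urllib.request

URL = "http://192.168.16.10:8780"  # internal placement API address from the log
TIMEOUT = 30                       # mirrors the configured 'timeout': '30'


def check() -> int:
    try:
        # Placement typically answers "/" unauthenticated with a version
        # document, so any response below 500 is treated as healthy here.
        with urllib.request.urlopen(URL, timeout=TIMEOUT) as resp:
            return 0 if resp.status < 500 else 1
    except urllib.error.HTTPError as exc:
        # A 4xx still proves the service is up and responding.
        return 0 if exc.code < 500 else 1
    except (urllib.error.URLError, OSError):
        return 1


if __name__ == "__main__":
    sys.exit(check())

The same probe, pointed at 192.168.16.11 and 192.168.16.12, corresponds to the healthchecks rendered for testbed-node-1 and testbed-node-2.
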
2025-11-01 14:20:49.310131 | orchestrator | 2025-11-01 14:20:49.310151 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-11-01 14:20:49.310168 | orchestrator | Saturday 01 November 2025 14:20:34 +0000 (0:00:00.075) 0:01:07.392 ***** 2025-11-01 14:20:49.310186 | orchestrator | 2025-11-01 14:20:49.310222 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-11-01 14:20:49.310249 | orchestrator | Saturday 01 November 2025 14:20:34 +0000 (0:00:00.062) 0:01:07.455 ***** 2025-11-01 14:20:49.310268 | orchestrator | 2025-11-01 14:20:49.310286 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-11-01 14:20:49.310303 | orchestrator | Saturday 01 November 2025 14:20:34 +0000 (0:00:00.069) 0:01:07.524 ***** 2025-11-01 14:20:49.310314 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:20:49.310325 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:20:49.310335 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:20:49.310346 | orchestrator | 2025-11-01 14:20:49.310357 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:20:49.310368 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 14:20:49.310381 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-01 14:20:49.310391 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-01 14:20:49.310402 | orchestrator | 2025-11-01 14:20:49.310412 | orchestrator | 2025-11-01 14:20:49.310423 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:20:49.310434 | orchestrator | Saturday 01 November 2025 14:20:45 +0000 (0:00:10.764) 0:01:18.288 ***** 2025-11-01 14:20:49.310445 | orchestrator | =============================================================================== 2025-11-01 14:20:49.310455 | orchestrator | placement : Running placement bootstrap container ---------------------- 17.37s 2025-11-01 14:20:49.310466 | orchestrator | placement : Restart placement-api container ---------------------------- 10.76s 2025-11-01 14:20:49.310477 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.12s 2025-11-01 14:20:49.310487 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.43s 2025-11-01 14:20:49.310498 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.21s 2025-11-01 14:20:49.310560 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.01s 2025-11-01 14:20:49.310607 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.73s 2025-11-01 14:20:49.310618 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.62s 2025-11-01 14:20:49.310629 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.10s 2025-11-01 14:20:49.310640 | orchestrator | placement : Creating placement databases -------------------------------- 2.75s 2025-11-01 14:20:49.310651 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.52s 2025-11-01 14:20:49.310672 | orchestrator | service-cert-copy : placement | Copying over extra CA 
certificates ------ 1.51s 2025-11-01 14:20:49.310683 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.48s 2025-11-01 14:20:49.310694 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.44s 2025-11-01 14:20:49.310705 | orchestrator | placement : Copying over config.json files for services ----------------- 1.43s 2025-11-01 14:20:49.310715 | orchestrator | placement : include_tasks ----------------------------------------------- 1.31s 2025-11-01 14:20:49.310726 | orchestrator | placement : Check placement containers ---------------------------------- 1.11s 2025-11-01 14:20:49.310736 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.96s 2025-11-01 14:20:49.310747 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.92s 2025-11-01 14:20:49.310757 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.85s 2025-11-01 14:20:49.310768 | orchestrator | 2025-11-01 14:20:49 | INFO  | Task 8c98ffe9-09c3-445c-886c-6f03770467c8 is in state STARTED 2025-11-01 14:20:49.310786 | orchestrator | 2025-11-01 14:20:49 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:20:49.311742 | orchestrator | 2025-11-01 14:20:49 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:20:49.313400 | orchestrator | 2025-11-01 14:20:49 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:20:49.313438 | orchestrator | 2025-11-01 14:20:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:20:52.362750 | orchestrator | 2025-11-01 14:20:52 | INFO  | Task 8c98ffe9-09c3-445c-886c-6f03770467c8 is in state STARTED 2025-11-01 14:20:52.363949 | orchestrator | 2025-11-01 14:20:52 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:20:52.364993 | orchestrator | 2025-11-01 14:20:52 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:20:52.366116 | orchestrator | 2025-11-01 14:20:52 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:20:52.366286 | orchestrator | 2025-11-01 14:20:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:20:55.416841 | orchestrator | 2025-11-01 14:20:55 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:20:55.419676 | orchestrator | 2025-11-01 14:20:55 | INFO  | Task 8c98ffe9-09c3-445c-886c-6f03770467c8 is in state SUCCESS 2025-11-01 14:20:55.422984 | orchestrator | 2025-11-01 14:20:55 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:20:55.425157 | orchestrator | 2025-11-01 14:20:55 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:20:55.426903 | orchestrator | 2025-11-01 14:20:55 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:20:55.426923 | orchestrator | 2025-11-01 14:20:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:20:58.481224 | orchestrator | 2025-11-01 14:20:58 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:20:58.484844 | orchestrator | 2025-11-01 14:20:58 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:20:58.487726 | orchestrator | 2025-11-01 14:20:58 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 
14:20:58.490099 | orchestrator | 2025-11-01 14:20:58 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:20:58.491079 | orchestrator | 2025-11-01 14:20:58 | INFO  | Wait 1 second(s) until the next check [identical polling output repeated from 14:21:01 through 14:22:02: Tasks 9ffbc215-6eca-4421-8f65-4cb7801c29e0, 6ed57eb0-6997-460a-8baf-57a5919c05ba, 5abab8b0-fcad-4d57-af08-258ad364560e and 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 remain in state STARTED, re-checked roughly every 3 seconds] 2025-11-01 14:22:05.598893 | orchestrator | 2025-11-01 14:22:05 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:05.600294 | orchestrator | 2025-11-01 
14:22:05 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:22:05.603221 | orchestrator | 2025-11-01 14:22:05 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:05.603988 | orchestrator | 2025-11-01 14:22:05 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:05.604013 | orchestrator | 2025-11-01 14:22:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:08.642682 | orchestrator | 2025-11-01 14:22:08 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:08.644908 | orchestrator | 2025-11-01 14:22:08 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:22:08.646405 | orchestrator | 2025-11-01 14:22:08 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:08.648186 | orchestrator | 2025-11-01 14:22:08 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:08.648464 | orchestrator | 2025-11-01 14:22:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:11.698606 | orchestrator | 2025-11-01 14:22:11 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:11.699877 | orchestrator | 2025-11-01 14:22:11 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state STARTED 2025-11-01 14:22:11.702147 | orchestrator | 2025-11-01 14:22:11 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:11.704532 | orchestrator | 2025-11-01 14:22:11 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:11.704565 | orchestrator | 2025-11-01 14:22:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:14.755119 | orchestrator | 2025-11-01 14:22:14 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state STARTED 2025-11-01 14:22:14.756382 | orchestrator | 2025-11-01 14:22:14 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:14.760497 | orchestrator | 2025-11-01 14:22:14.760567 | orchestrator | 2025-11-01 14:22:14.760580 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:22:14.760591 | orchestrator | 2025-11-01 14:22:14.760602 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:22:14.760614 | orchestrator | Saturday 01 November 2025 14:20:51 +0000 (0:00:00.214) 0:00:00.214 ***** 2025-11-01 14:22:14.760625 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:22:14.760637 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:22:14.760647 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:22:14.760658 | orchestrator | 2025-11-01 14:22:14.760669 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:22:14.760680 | orchestrator | Saturday 01 November 2025 14:20:52 +0000 (0:00:00.331) 0:00:00.545 ***** 2025-11-01 14:22:14.760691 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-11-01 14:22:14.760702 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-11-01 14:22:14.760712 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-11-01 14:22:14.760723 | orchestrator | 2025-11-01 14:22:14.760733 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-11-01 14:22:14.760744 | orchestrator | 2025-11-01 
14:22:14.760754 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-11-01 14:22:14.760765 | orchestrator | Saturday 01 November 2025 14:20:52 +0000 (0:00:00.876) 0:00:01.422 ***** 2025-11-01 14:22:14.760775 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:22:14.760786 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:22:14.760796 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:22:14.760807 | orchestrator | 2025-11-01 14:22:14.760817 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:22:14.760829 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:22:14.760841 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:22:14.760853 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:22:14.760864 | orchestrator | 2025-11-01 14:22:14.760874 | orchestrator | 2025-11-01 14:22:14.760885 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:22:14.760895 | orchestrator | Saturday 01 November 2025 14:20:53 +0000 (0:00:00.719) 0:00:02.141 ***** 2025-11-01 14:22:14.760906 | orchestrator | =============================================================================== 2025-11-01 14:22:14.760916 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s 2025-11-01 14:22:14.760927 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.72s 2025-11-01 14:22:14.760961 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-11-01 14:22:14.760972 | orchestrator | 2025-11-01 14:22:14.761018 | orchestrator | 2025-11-01 14:22:14 | INFO  | Task 6ed57eb0-6997-460a-8baf-57a5919c05ba is in state SUCCESS 2025-11-01 14:22:14.763051 | orchestrator | 2025-11-01 14:22:14.763154 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:22:14.763168 | orchestrator | 2025-11-01 14:22:14.763227 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:22:14.763241 | orchestrator | Saturday 01 November 2025 14:17:09 +0000 (0:00:00.383) 0:00:00.383 ***** 2025-11-01 14:22:14.763252 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:22:14.763263 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:22:14.763337 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:22:14.763350 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:22:14.763361 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:22:14.763371 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:22:14.763382 | orchestrator | 2025-11-01 14:22:14.763393 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:22:14.763404 | orchestrator | Saturday 01 November 2025 14:17:11 +0000 (0:00:01.204) 0:00:01.588 ***** 2025-11-01 14:22:14.763414 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-11-01 14:22:14.763425 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-11-01 14:22:14.763436 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-11-01 14:22:14.763446 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-11-01 14:22:14.763457 | orchestrator | ok: 
[testbed-node-4] => (item=enable_neutron_True) 2025-11-01 14:22:14.763467 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-11-01 14:22:14.763478 | orchestrator | 2025-11-01 14:22:14.763488 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-11-01 14:22:14.763522 | orchestrator | 2025-11-01 14:22:14.763534 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-11-01 14:22:14.763545 | orchestrator | Saturday 01 November 2025 14:17:12 +0000 (0:00:01.012) 0:00:02.600 ***** 2025-11-01 14:22:14.763555 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:22:14.763567 | orchestrator | 2025-11-01 14:22:14.763578 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-11-01 14:22:14.763588 | orchestrator | Saturday 01 November 2025 14:17:13 +0000 (0:00:01.453) 0:00:04.054 ***** 2025-11-01 14:22:14.763599 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:22:14.763610 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:22:14.763620 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:22:14.763679 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:22:14.763691 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:22:14.763702 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:22:14.763712 | orchestrator | 2025-11-01 14:22:14.763723 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-11-01 14:22:14.763734 | orchestrator | Saturday 01 November 2025 14:17:14 +0000 (0:00:01.449) 0:00:05.503 ***** 2025-11-01 14:22:14.763744 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:22:14.763755 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:22:14.763765 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:22:14.763776 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:22:14.763812 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:22:14.763823 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:22:14.763834 | orchestrator | 2025-11-01 14:22:14.763845 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-11-01 14:22:14.763856 | orchestrator | Saturday 01 November 2025 14:17:16 +0000 (0:00:01.208) 0:00:06.712 ***** 2025-11-01 14:22:14.763867 | orchestrator | ok: [testbed-node-0] => { 2025-11-01 14:22:14.763878 | orchestrator |  "changed": false, 2025-11-01 14:22:14.763889 | orchestrator |  "msg": "All assertions passed" 2025-11-01 14:22:14.763914 | orchestrator | } 2025-11-01 14:22:14.763925 | orchestrator | ok: [testbed-node-1] => { 2025-11-01 14:22:14.763935 | orchestrator |  "changed": false, 2025-11-01 14:22:14.763946 | orchestrator |  "msg": "All assertions passed" 2025-11-01 14:22:14.763956 | orchestrator | } 2025-11-01 14:22:14.763967 | orchestrator | ok: [testbed-node-2] => { 2025-11-01 14:22:14.763977 | orchestrator |  "changed": false, 2025-11-01 14:22:14.763988 | orchestrator |  "msg": "All assertions passed" 2025-11-01 14:22:14.763998 | orchestrator | } 2025-11-01 14:22:14.764009 | orchestrator | ok: [testbed-node-3] => { 2025-11-01 14:22:14.764019 | orchestrator |  "changed": false, 2025-11-01 14:22:14.764030 | orchestrator |  "msg": "All assertions passed" 2025-11-01 14:22:14.764040 | orchestrator | } 2025-11-01 14:22:14.764051 | orchestrator | ok: [testbed-node-4] => { 2025-11-01 14:22:14.764061 | 
orchestrator |  "changed": false, 2025-11-01 14:22:14.764072 | orchestrator |  "msg": "All assertions passed" 2025-11-01 14:22:14.764082 | orchestrator | } 2025-11-01 14:22:14.764093 | orchestrator | ok: [testbed-node-5] => { 2025-11-01 14:22:14.764103 | orchestrator |  "changed": false, 2025-11-01 14:22:14.764114 | orchestrator |  "msg": "All assertions passed" 2025-11-01 14:22:14.764124 | orchestrator | } 2025-11-01 14:22:14.764135 | orchestrator | 2025-11-01 14:22:14.764145 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-11-01 14:22:14.764156 | orchestrator | Saturday 01 November 2025 14:17:17 +0000 (0:00:00.990) 0:00:07.703 ***** 2025-11-01 14:22:14.764167 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.764178 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.764188 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.764199 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.764209 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.764220 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.764230 | orchestrator | 2025-11-01 14:22:14.764241 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-11-01 14:22:14.764251 | orchestrator | Saturday 01 November 2025 14:17:17 +0000 (0:00:00.704) 0:00:08.407 ***** 2025-11-01 14:22:14.764262 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-11-01 14:22:14.764273 | orchestrator | 2025-11-01 14:22:14.764283 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-11-01 14:22:14.764294 | orchestrator | Saturday 01 November 2025 14:17:21 +0000 (0:00:03.588) 0:00:11.995 ***** 2025-11-01 14:22:14.764304 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-11-01 14:22:14.764315 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-11-01 14:22:14.764326 | orchestrator | 2025-11-01 14:22:14.764349 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-11-01 14:22:14.764360 | orchestrator | Saturday 01 November 2025 14:17:29 +0000 (0:00:07.779) 0:00:19.775 ***** 2025-11-01 14:22:14.764370 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-01 14:22:14.764381 | orchestrator | 2025-11-01 14:22:14.764398 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-11-01 14:22:14.764409 | orchestrator | Saturday 01 November 2025 14:17:32 +0000 (0:00:03.372) 0:00:23.147 ***** 2025-11-01 14:22:14.764420 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 14:22:14.764431 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-11-01 14:22:14.764441 | orchestrator | 2025-11-01 14:22:14.764452 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-11-01 14:22:14.764463 | orchestrator | Saturday 01 November 2025 14:17:36 +0000 (0:00:03.993) 0:00:27.140 ***** 2025-11-01 14:22:14.764473 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 14:22:14.764484 | orchestrator | 2025-11-01 14:22:14.764494 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-11-01 14:22:14.764534 | orchestrator | Saturday 01 November 2025 14:17:39 +0000 
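The service-ks-register tasks above populate the Keystone catalog for neutron: a "network" service, internal and public endpoints on port 9696, the service project, the neutron service user, and admin/service role grants on that project. As a rough illustration only (not what the kolla-ansible role literally executes), the same registration can be expressed with openstacksdk; the cloud name and password are placeholders:

import openstack

# Placeholder cloud name; credentials come from clouds.yaml or the environment.
conn = openstack.connect(cloud="testbed-admin")

# Service and endpoints, mirroring the URLs shown in the log above.
service = conn.identity.create_service(name="neutron", type="network")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:9696"),
    ("public", "https://api.testbed.osism.xyz:9696"),
]:
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

# The service project already exists (the task above reported "ok").
project = conn.identity.find_project("service")
user = conn.identity.create_user(name="neutron", password="REPLACE_ME",
                                 default_project_id=project.id)
for role_name in ("admin", "service"):
    role = conn.identity.find_role(role_name)
    conn.identity.assign_project_role_to_user(project, user, role)
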
(0:00:03.251) 0:00:30.391 ***** 2025-11-01 14:22:14.764552 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-11-01 14:22:14.764562 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-11-01 14:22:14.764573 | orchestrator | 2025-11-01 14:22:14.764584 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-11-01 14:22:14.764594 | orchestrator | Saturday 01 November 2025 14:17:46 +0000 (0:00:07.043) 0:00:37.434 ***** 2025-11-01 14:22:14.764605 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.764615 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.764626 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.764637 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.764647 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.764658 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.764668 | orchestrator | 2025-11-01 14:22:14.764679 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-11-01 14:22:14.764690 | orchestrator | Saturday 01 November 2025 14:17:47 +0000 (0:00:00.875) 0:00:38.310 ***** 2025-11-01 14:22:14.764700 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.764711 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.764721 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.764732 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.764742 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.764753 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.764763 | orchestrator | 2025-11-01 14:22:14.764774 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-11-01 14:22:14.764785 | orchestrator | Saturday 01 November 2025 14:17:49 +0000 (0:00:02.144) 0:00:40.455 ***** 2025-11-01 14:22:14.764796 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:22:14.764806 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:22:14.764817 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:22:14.764828 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:22:14.764838 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:22:14.764849 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:22:14.764859 | orchestrator | 2025-11-01 14:22:14.764870 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-11-01 14:22:14.764881 | orchestrator | Saturday 01 November 2025 14:17:51 +0000 (0:00:01.169) 0:00:41.624 ***** 2025-11-01 14:22:14.764891 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.764902 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.764913 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.764923 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.764934 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.764944 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.764955 | orchestrator | 2025-11-01 14:22:14.764965 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-11-01 14:22:14.764976 | orchestrator | Saturday 01 November 2025 14:17:54 +0000 (0:00:03.122) 0:00:44.747 ***** 2025-11-01 14:22:14.765009 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.765039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.765061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.765072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.765084 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.765095 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.765106 | orchestrator | 2025-11-01 14:22:14.765125 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-11-01 14:22:14.765137 | orchestrator | Saturday 01 November 2025 14:17:58 +0000 (0:00:03.827) 0:00:48.575 ***** 2025-11-01 14:22:14.765148 | orchestrator | [WARNING]: Skipped 2025-11-01 14:22:14.765158 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-11-01 14:22:14.765169 | orchestrator | due to this access issue: 2025-11-01 14:22:14.765180 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-11-01 14:22:14.765191 | orchestrator | a directory 2025-11-01 14:22:14.765201 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 14:22:14.765212 | orchestrator | 2025-11-01 14:22:14.765223 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-11-01 14:22:14.765238 | orchestrator | Saturday 01 November 2025 14:17:58 +0000 (0:00:00.900) 0:00:49.475 ***** 2025-11-01 14:22:14.765255 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:22:14.765267 | orchestrator | 2025-11-01 14:22:14.765278 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-11-01 14:22:14.765289 | orchestrator | Saturday 01 November 2025 14:18:00 +0000 (0:00:01.179) 0:00:50.654 ***** 2025-11-01 14:22:14.765300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
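The container definitions in the items above carry Docker-style healthchecks: neutron_server is probed with healthcheck_curl against its bound API port, and the neutron_ovn_metadata_agent containers with healthcheck_port on 6640. As an illustration of what such probes boil down to (not the actual scripts shipped in the kolla images), roughly:

import socket
import urllib.request

def http_healthcheck(url: str, timeout: float = 30.0) -> bool:
    """HTTP-level probe, analogous to a curl-based container healthcheck."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except OSError:
        return False

def port_healthcheck(host: str, port: int, timeout: float = 30.0) -> bool:
    """TCP-level probe, analogous to a port-based container healthcheck."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Targets mirror the ports in the healthcheck entries above; the host is a placeholder.
print(http_healthcheck("http://192.168.16.10:9696"))
print(port_healthcheck("127.0.0.1", 6640))
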
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.765312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.765323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.765335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.765365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.765378 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.765389 | orchestrator | 2025-11-01 14:22:14.765400 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-11-01 14:22:14.765410 | orchestrator | Saturday 01 November 2025 14:18:03 +0000 (0:00:03.169) 0:00:53.824 ***** 2025-11-01 14:22:14.765422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.765433 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.765444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.765464 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.765475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.765491 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.765526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.765538 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.765549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.765561 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.765572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.765583 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.765593 | orchestrator | 2025-11-01 14:22:14.765604 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-11-01 14:22:14.765622 | orchestrator | Saturday 01 November 2025 14:18:06 +0000 (0:00:03.194) 0:00:57.019 ***** 2025-11-01 
14:22:14.765634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.765645 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.765675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.765687 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.765698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.765710 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.765720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.765732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.765750 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.765761 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.765771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.765782 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.765793 | orchestrator | 2025-11-01 14:22:14.765804 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-11-01 14:22:14.765814 | orchestrator | Saturday 01 November 2025 14:18:09 +0000 (0:00:03.406) 0:01:00.425 ***** 2025-11-01 14:22:14.765825 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.765836 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.765846 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.765857 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.765868 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.765878 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.765889 | orchestrator | 2025-11-01 14:22:14.765899 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-11-01 14:22:14.765915 | orchestrator | Saturday 01 November 2025 14:18:12 +0000 (0:00:02.340) 0:01:02.766 ***** 2025-11-01 14:22:14.765926 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.765937 | orchestrator | 2025-11-01 14:22:14.765947 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-11-01 14:22:14.765963 | orchestrator | Saturday 01 November 2025 14:18:12 +0000 (0:00:00.125) 0:01:02.891 ***** 2025-11-01 14:22:14.765974 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.765984 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.765995 | orchestrator | skipping: 
[testbed-node-2] 2025-11-01 14:22:14.766005 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.766062 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.766076 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.766086 | orchestrator | 2025-11-01 14:22:14.766097 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-11-01 14:22:14.766108 | orchestrator | Saturday 01 November 2025 14:18:13 +0000 (0:00:00.880) 0:01:03.772 ***** 2025-11-01 14:22:14.766119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.766137 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.766148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.766159 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.766170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.766181 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.766712 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.766789 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.766820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.766832 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.766844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.766877 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.766889 | orchestrator | 2025-11-01 14:22:14.766901 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-11-01 14:22:14.766913 | orchestrator | Saturday 01 November 2025 14:18:16 +0000 (0:00:03.571) 0:01:07.344 ***** 2025-11-01 14:22:14.766925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.766937 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.766970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.766983 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.767002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2025-11-01 14:22:14.767014 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.767025 | orchestrator | 2025-11-01 14:22:14.767036 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-11-01 14:22:14.767047 | orchestrator | Saturday 01 November 2025 14:18:23 +0000 (0:00:06.947) 0:01:14.291 ***** 2025-11-01 14:22:14.767058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.767083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.767102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': 
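The config.json files copied above drive how each kolla container starts: the container's start script reads /var/lib/kolla/config_files/config.json, copies the listed files into place, and then execs the service command. A sketch of that general shape, with illustrative paths and command rather than the actual rendered content:

import json

# Illustrative only; the rendered files live under /etc/kolla/<service>/config.json.
neutron_server_config = {
    "command": "neutron-server --config-file /etc/neutron/neutron.conf "
               "--config-file /etc/neutron/plugins/ml2/ml2_conf.ini",
    "config_files": [
        {"source": "/var/lib/kolla/config_files/neutron.conf",
         "dest": "/etc/neutron/neutron.conf", "owner": "neutron", "perm": "0600"},
        {"source": "/var/lib/kolla/config_files/ml2_conf.ini",
         "dest": "/etc/neutron/plugins/ml2/ml2_conf.ini", "owner": "neutron", "perm": "0600"},
    ],
}
print(json.dumps(neutron_server_config, indent=2))
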
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.767113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.767125 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.767137 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.767148 | orchestrator | 2025-11-01 14:22:14.767159 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-11-01 14:22:14.767170 | orchestrator | Saturday 01 November 2025 14:18:32 +0000 (0:00:08.444) 0:01:22.736 ***** 2025-11-01 14:22:14.767193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.767213 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.767224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.767235 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.767249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.767261 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.767274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.767286 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.767299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.767311 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.767334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.767355 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.767367 | orchestrator | 2025-11-01 14:22:14.767379 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-11-01 14:22:14.767391 | orchestrator | Saturday 01 November 2025 14:18:35 +0000 (0:00:02.992) 0:01:25.728 ***** 2025-11-01 14:22:14.767403 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.767415 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:22:14.767427 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.767439 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.767450 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:22:14.767462 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:22:14.767474 | orchestrator | 2025-11-01 14:22:14.767486 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-11-01 14:22:14.767529 | orchestrator | Saturday 01 November 2025 14:18:38 +0000 (0:00:03.711) 0:01:29.439 ***** 2025-11-01 14:22:14.767543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.767556 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.767568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.767581 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.767594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.767612 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.767636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.767649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.767661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.767672 | orchestrator | 2025-11-01 14:22:14.767683 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-11-01 14:22:14.767694 | orchestrator | Saturday 01 November 2025 14:18:43 +0000 (0:00:04.864) 0:01:34.303 ***** 2025-11-01 14:22:14.767705 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.767715 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.767725 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.767736 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.767747 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.767757 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.767768 | orchestrator | 2025-11-01 14:22:14.767778 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-11-01 14:22:14.767789 | orchestrator | Saturday 01 November 2025 14:18:46 +0000 (0:00:02.744) 0:01:37.047 ***** 2025-11-01 14:22:14.767800 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.767810 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.767821 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.767831 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.767842 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.767858 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.767869 | orchestrator | 2025-11-01 14:22:14.767880 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-11-01 14:22:14.767891 | orchestrator | Saturday 01 November 2025 14:18:49 +0000 (0:00:03.148) 0:01:40.196 ***** 2025-11-01 14:22:14.767901 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.767912 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.767922 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.767933 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.767943 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.767954 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.767964 | orchestrator | 2025-11-01 14:22:14.767975 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-11-01 14:22:14.767986 | orchestrator | Saturday 01 November 2025 14:18:53 +0000 (0:00:04.022) 0:01:44.219 ***** 2025-11-01 14:22:14.767996 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.768007 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.768018 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.768028 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.768039 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.768049 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.768060 | orchestrator | 2025-11-01 14:22:14.768071 
| orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-11-01 14:22:14.768081 | orchestrator | Saturday 01 November 2025 14:18:56 +0000 (0:00:03.238) 0:01:47.457 ***** 2025-11-01 14:22:14.768092 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.768103 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.768113 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.768124 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.768140 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.768151 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.768162 | orchestrator | 2025-11-01 14:22:14.768172 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-11-01 14:22:14.768188 | orchestrator | Saturday 01 November 2025 14:18:59 +0000 (0:00:02.929) 0:01:50.386 ***** 2025-11-01 14:22:14.768199 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.768209 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.768220 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.768231 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.768241 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.768252 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.768262 | orchestrator | 2025-11-01 14:22:14.768273 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-11-01 14:22:14.768284 | orchestrator | Saturday 01 November 2025 14:19:03 +0000 (0:00:03.725) 0:01:54.112 ***** 2025-11-01 14:22:14.768294 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-01 14:22:14.768305 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.768316 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-01 14:22:14.768326 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.768337 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-01 14:22:14.768348 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.768358 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-01 14:22:14.768369 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.768380 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-01 14:22:14.768390 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.768401 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-01 14:22:14.768411 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.768434 | orchestrator | 2025-11-01 14:22:14.768445 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-11-01 14:22:14.768456 | orchestrator | Saturday 01 November 2025 14:19:06 +0000 (0:00:02.762) 0:01:56.874 ***** 2025-11-01 14:22:14.768467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.768478 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.768489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.768516 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.768538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.768550 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.768561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.768572 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.768583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.768600 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.768611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.768623 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.768633 | orchestrator | 2025-11-01 14:22:14.768644 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-11-01 14:22:14.768655 | orchestrator | Saturday 01 November 2025 14:19:08 +0000 (0:00:02.211) 0:01:59.085 ***** 2025-11-01 14:22:14.768666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.768677 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.768699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.768711 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.768722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.768739 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.768750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.768761 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.768772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.768784 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.768795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.768806 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.768816 | orchestrator | 2025-11-01 14:22:14.768827 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-11-01 14:22:14.768838 | orchestrator | Saturday 01 November 2025 14:19:11 +0000 (0:00:02.763) 0:02:01.849 ***** 2025-11-01 14:22:14.768852 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.768867 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.768879 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.768889 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.768899 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.768915 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.768926 | orchestrator | 2025-11-01 14:22:14.768937 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-11-01 14:22:14.768955 | orchestrator | Saturday 01 November 2025 14:19:14 +0000 (0:00:02.882) 0:02:04.732 ***** 2025-11-01 14:22:14.768965 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.768976 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.768986 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.768997 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:22:14.769008 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:22:14.769018 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:22:14.769029 | orchestrator | 2025-11-01 14:22:14.769040 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-11-01 14:22:14.769051 | orchestrator | Saturday 01 November 2025 14:19:17 +0000 (0:00:03.434) 0:02:08.167 ***** 2025-11-01 14:22:14.769061 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.769072 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.769082 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.769093 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.769104 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.769114 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.769125 | orchestrator | 2025-11-01 14:22:14.769135 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-11-01 14:22:14.769146 | orchestrator | Saturday 01 November 2025 14:19:19 +0000 (0:00:01.917) 0:02:10.084 ***** 2025-11-01 14:22:14.769157 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.769167 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.769178 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.769189 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.769199 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.769209 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.769220 | orchestrator | 2025-11-01 14:22:14.769231 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-11-01 14:22:14.769241 | orchestrator | Saturday 01 November 2025 14:19:21 +0000 (0:00:02.459) 0:02:12.543 ***** 2025-11-01 14:22:14.769252 | orchestrator | 
skipping: [testbed-node-0] 2025-11-01 14:22:14.769262 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.769273 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.769284 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.769294 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.769305 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.769316 | orchestrator | 2025-11-01 14:22:14.769326 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-11-01 14:22:14.769337 | orchestrator | Saturday 01 November 2025 14:19:24 +0000 (0:00:02.641) 0:02:15.185 ***** 2025-11-01 14:22:14.769348 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.769358 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.769369 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.769380 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.769390 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.769401 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.769411 | orchestrator | 2025-11-01 14:22:14.769422 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-11-01 14:22:14.769433 | orchestrator | Saturday 01 November 2025 14:19:27 +0000 (0:00:02.997) 0:02:18.183 ***** 2025-11-01 14:22:14.769444 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.769454 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.769465 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.769475 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.769486 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.769496 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.769523 | orchestrator | 2025-11-01 14:22:14.769534 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-11-01 14:22:14.769545 | orchestrator | Saturday 01 November 2025 14:19:30 +0000 (0:00:02.958) 0:02:21.142 ***** 2025-11-01 14:22:14.769555 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.769566 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.769587 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.769598 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.769608 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.769619 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.769630 | orchestrator | 2025-11-01 14:22:14.769640 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-11-01 14:22:14.769651 | orchestrator | Saturday 01 November 2025 14:19:34 +0000 (0:00:04.064) 0:02:25.207 ***** 2025-11-01 14:22:14.769662 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.769672 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.769683 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.769694 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.769704 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.769715 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.769725 | orchestrator | 2025-11-01 14:22:14.769736 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-11-01 14:22:14.769747 | orchestrator | Saturday 01 November 2025 14:19:36 +0000 (0:00:02.101) 0:02:27.308 ***** 2025-11-01 14:22:14.769758 | 
orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-01 14:22:14.769769 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.769780 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-01 14:22:14.769791 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.769801 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-01 14:22:14.769812 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.769823 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-01 14:22:14.769834 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.769850 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-01 14:22:14.769861 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.769877 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-01 14:22:14.769889 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.769899 | orchestrator | 2025-11-01 14:22:14.769910 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-11-01 14:22:14.769921 | orchestrator | Saturday 01 November 2025 14:19:39 +0000 (0:00:02.508) 0:02:29.816 ***** 2025-11-01 14:22:14.769932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.769944 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.769955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.769972 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.769984 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 14:22:14.769995 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.770006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.770066 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.770094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.770106 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.770116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 14:22:14.770128 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.770145 | 
orchestrator | 2025-11-01 14:22:14.770155 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-11-01 14:22:14.770166 | orchestrator | Saturday 01 November 2025 14:19:41 +0000 (0:00:02.254) 0:02:32.070 ***** 2025-11-01 14:22:14.770177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.770189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.770205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.770222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 14:22:14.770234 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.770252 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 14:22:14.770263 | orchestrator | 2025-11-01 14:22:14.770273 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-11-01 14:22:14.770284 | orchestrator | Saturday 01 November 2025 14:19:45 +0000 (0:00:04.433) 0:02:36.504 ***** 2025-11-01 14:22:14.770295 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:14.770306 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:14.770316 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:14.770327 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:22:14.770338 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:22:14.770348 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:22:14.770359 | orchestrator | 2025-11-01 14:22:14.770369 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-11-01 14:22:14.770380 | orchestrator | Saturday 01 November 2025 14:19:46 +0000 (0:00:00.660) 0:02:37.164 ***** 2025-11-01 14:22:14.770391 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:22:14.770401 | orchestrator | 2025-11-01 14:22:14.770412 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-11-01 14:22:14.770423 | orchestrator | Saturday 01 November 2025 14:19:48 +0000 (0:00:02.386) 0:02:39.551 ***** 2025-11-01 14:22:14.770433 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:22:14.770444 | orchestrator | 2025-11-01 14:22:14.770454 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-11-01 14:22:14.770465 | orchestrator | Saturday 01 November 2025 14:19:51 +0000 (0:00:02.503) 0:02:42.054 ***** 2025-11-01 14:22:14.770475 | 
orchestrator | changed: [testbed-node-0] 2025-11-01 14:22:14.770486 | orchestrator | 2025-11-01 14:22:14.770497 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-01 14:22:14.770526 | orchestrator | Saturday 01 November 2025 14:20:39 +0000 (0:00:47.653) 0:03:29.708 ***** 2025-11-01 14:22:14.770536 | orchestrator | 2025-11-01 14:22:14.770547 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-01 14:22:14.770558 | orchestrator | Saturday 01 November 2025 14:20:39 +0000 (0:00:00.079) 0:03:29.788 ***** 2025-11-01 14:22:14.770568 | orchestrator | 2025-11-01 14:22:14.770579 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-01 14:22:14.770590 | orchestrator | Saturday 01 November 2025 14:20:39 +0000 (0:00:00.294) 0:03:30.083 ***** 2025-11-01 14:22:14.770600 | orchestrator | 2025-11-01 14:22:14.770611 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-01 14:22:14.770622 | orchestrator | Saturday 01 November 2025 14:20:39 +0000 (0:00:00.144) 0:03:30.228 ***** 2025-11-01 14:22:14.770633 | orchestrator | 2025-11-01 14:22:14.770648 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-01 14:22:14.770660 | orchestrator | Saturday 01 November 2025 14:20:39 +0000 (0:00:00.071) 0:03:30.300 ***** 2025-11-01 14:22:14.770670 | orchestrator | 2025-11-01 14:22:14.770696 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-01 14:22:14.770707 | orchestrator | Saturday 01 November 2025 14:20:39 +0000 (0:00:00.065) 0:03:30.365 ***** 2025-11-01 14:22:14.770717 | orchestrator | 2025-11-01 14:22:14.770728 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-11-01 14:22:14.770739 | orchestrator | Saturday 01 November 2025 14:20:39 +0000 (0:00:00.084) 0:03:30.450 ***** 2025-11-01 14:22:14.770749 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:22:14.770760 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:22:14.770771 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:22:14.770781 | orchestrator | 2025-11-01 14:22:14.770792 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-11-01 14:22:14.770802 | orchestrator | Saturday 01 November 2025 14:21:10 +0000 (0:00:30.193) 0:04:00.643 ***** 2025-11-01 14:22:14.770813 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:22:14.770824 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:22:14.770834 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:22:14.770845 | orchestrator | 2025-11-01 14:22:14.770855 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:22:14.770867 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-01 14:22:14.770879 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-11-01 14:22:14.770890 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-11-01 14:22:14.770901 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-01 14:22:14.770911 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 
failed=0 skipped=32  rescued=0 ignored=0 2025-11-01 14:22:14.770922 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-01 14:22:14.770932 | orchestrator | 2025-11-01 14:22:14.770943 | orchestrator | 2025-11-01 14:22:14.770954 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:22:14.770965 | orchestrator | Saturday 01 November 2025 14:22:12 +0000 (0:01:02.011) 0:05:02.656 ***** 2025-11-01 14:22:14.770975 | orchestrator | =============================================================================== 2025-11-01 14:22:14.770986 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 62.01s 2025-11-01 14:22:14.770997 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 47.65s 2025-11-01 14:22:14.771008 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.19s 2025-11-01 14:22:14.771018 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.44s 2025-11-01 14:22:14.771029 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.78s 2025-11-01 14:22:14.771040 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.04s 2025-11-01 14:22:14.771050 | orchestrator | neutron : Copying over config.json files for services ------------------- 6.95s 2025-11-01 14:22:14.771061 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.86s 2025-11-01 14:22:14.771071 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.43s 2025-11-01 14:22:14.771082 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 4.06s 2025-11-01 14:22:14.771092 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 4.02s 2025-11-01 14:22:14.771103 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.99s 2025-11-01 14:22:14.771121 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.83s 2025-11-01 14:22:14.771131 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 3.73s 2025-11-01 14:22:14.771142 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.71s 2025-11-01 14:22:14.771152 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.59s 2025-11-01 14:22:14.771163 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.57s 2025-11-01 14:22:14.771174 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.43s 2025-11-01 14:22:14.771184 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.41s 2025-11-01 14:22:14.771195 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.37s 2025-11-01 14:22:14.771206 | orchestrator | 2025-11-01 14:22:14 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:14.771216 | orchestrator | 2025-11-01 14:22:14 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:14.771227 | orchestrator | 2025-11-01 14:22:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:17.811737 | orchestrator | 2025-11-01 
14:22:17 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state STARTED 2025-11-01 14:22:17.813062 | orchestrator | 2025-11-01 14:22:17 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:17.814950 | orchestrator | 2025-11-01 14:22:17 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:17.817274 | orchestrator | 2025-11-01 14:22:17 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:17.817815 | orchestrator | 2025-11-01 14:22:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:20.862303 | orchestrator | 2025-11-01 14:22:20 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state STARTED 2025-11-01 14:22:20.863294 | orchestrator | 2025-11-01 14:22:20 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:20.866000 | orchestrator | 2025-11-01 14:22:20 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:20.868715 | orchestrator | 2025-11-01 14:22:20 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:20.869000 | orchestrator | 2025-11-01 14:22:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:23.914554 | orchestrator | 2025-11-01 14:22:23 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state STARTED 2025-11-01 14:22:23.916394 | orchestrator | 2025-11-01 14:22:23 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:23.917978 | orchestrator | 2025-11-01 14:22:23 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:23.919251 | orchestrator | 2025-11-01 14:22:23 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:23.919561 | orchestrator | 2025-11-01 14:22:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:26.961217 | orchestrator | 2025-11-01 14:22:26 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state STARTED 2025-11-01 14:22:26.961745 | orchestrator | 2025-11-01 14:22:26 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:26.963003 | orchestrator | 2025-11-01 14:22:26 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:26.964007 | orchestrator | 2025-11-01 14:22:26 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:26.964027 | orchestrator | 2025-11-01 14:22:26 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:29.999482 | orchestrator | 2025-11-01 14:22:29 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state STARTED 2025-11-01 14:22:30.000845 | orchestrator | 2025-11-01 14:22:29 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:30.004614 | orchestrator | 2025-11-01 14:22:30 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:30.006153 | orchestrator | 2025-11-01 14:22:30 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:30.006297 | orchestrator | 2025-11-01 14:22:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:33.058833 | orchestrator | 2025-11-01 14:22:33 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state STARTED 2025-11-01 14:22:33.058909 | orchestrator | 2025-11-01 14:22:33 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:33.058922 | orchestrator | 2025-11-01 
14:22:33 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:33.058934 | orchestrator | 2025-11-01 14:22:33 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:33.058945 | orchestrator | 2025-11-01 14:22:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:36.104732 | orchestrator | 2025-11-01 14:22:36 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state STARTED 2025-11-01 14:22:36.105292 | orchestrator | 2025-11-01 14:22:36 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:36.106786 | orchestrator | 2025-11-01 14:22:36 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:36.107860 | orchestrator | 2025-11-01 14:22:36 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:36.108041 | orchestrator | 2025-11-01 14:22:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:39.157566 | orchestrator | 2025-11-01 14:22:39 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state STARTED 2025-11-01 14:22:39.158887 | orchestrator | 2025-11-01 14:22:39 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:39.160298 | orchestrator | 2025-11-01 14:22:39 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:39.161309 | orchestrator | 2025-11-01 14:22:39 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:39.161543 | orchestrator | 2025-11-01 14:22:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:42.217263 | orchestrator | 2025-11-01 14:22:42 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state STARTED 2025-11-01 14:22:42.217802 | orchestrator | 2025-11-01 14:22:42 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:42.219395 | orchestrator | 2025-11-01 14:22:42 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:42.220990 | orchestrator | 2025-11-01 14:22:42 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:42.221013 | orchestrator | 2025-11-01 14:22:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:45.285233 | orchestrator | 2025-11-01 14:22:45 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state STARTED 2025-11-01 14:22:45.285333 | orchestrator | 2025-11-01 14:22:45 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:45.285346 | orchestrator | 2025-11-01 14:22:45 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:45.285384 | orchestrator | 2025-11-01 14:22:45 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:45.285394 | orchestrator | 2025-11-01 14:22:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:48.357929 | orchestrator | 2025-11-01 14:22:48 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state STARTED 2025-11-01 14:22:48.363124 | orchestrator | 2025-11-01 14:22:48 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:48.367158 | orchestrator | 2025-11-01 14:22:48 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:48.371527 | orchestrator | 2025-11-01 14:22:48 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:48.371684 | orchestrator | 2025-11-01 
14:22:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:51.439235 | orchestrator | 2025-11-01 14:22:51 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state STARTED 2025-11-01 14:22:51.442972 | orchestrator | 2025-11-01 14:22:51 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:51.446105 | orchestrator | 2025-11-01 14:22:51 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:51.448793 | orchestrator | 2025-11-01 14:22:51 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:51.448825 | orchestrator | 2025-11-01 14:22:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:54.492765 | orchestrator | 2025-11-01 14:22:54 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state STARTED 2025-11-01 14:22:54.494660 | orchestrator | 2025-11-01 14:22:54 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:54.498148 | orchestrator | 2025-11-01 14:22:54 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state STARTED 2025-11-01 14:22:54.498630 | orchestrator | 2025-11-01 14:22:54 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:54.498965 | orchestrator | 2025-11-01 14:22:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:22:57.547462 | orchestrator | 2025-11-01 14:22:57 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:22:57.549544 | orchestrator | 2025-11-01 14:22:57 | INFO  | Task f5ad95fc-ae04-4476-b423-79ffffcc9243 is in state SUCCESS 2025-11-01 14:22:57.551697 | orchestrator | 2025-11-01 14:22:57 | INFO  | Task d1a0469c-c7e1-4fe7-b054-3e203e9a6f20 is in state STARTED 2025-11-01 14:22:57.553676 | orchestrator | 2025-11-01 14:22:57 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:22:57.556186 | orchestrator | 2025-11-01 14:22:57 | INFO  | Task 5abab8b0-fcad-4d57-af08-258ad364560e is in state SUCCESS 2025-11-01 14:22:57.556617 | orchestrator | 2025-11-01 14:22:57.556707 | orchestrator | 2025-11-01 14:22:57.556731 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:22:57.556743 | orchestrator | 2025-11-01 14:22:57.556754 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:22:57.556790 | orchestrator | Saturday 01 November 2025 14:22:17 +0000 (0:00:00.305) 0:00:00.305 ***** 2025-11-01 14:22:57.556814 | orchestrator | ok: [testbed-manager] 2025-11-01 14:22:57.556843 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:22:57.556855 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:22:57.556865 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:22:57.556876 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:22:57.556887 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:22:57.556897 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:22:57.556908 | orchestrator | 2025-11-01 14:22:57.556938 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:22:57.556949 | orchestrator | Saturday 01 November 2025 14:22:18 +0000 (0:00:00.938) 0:00:01.244 ***** 2025-11-01 14:22:57.556960 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-11-01 14:22:57.557119 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-11-01 14:22:57.557131 | orchestrator | ok: [testbed-node-4] => 
(item=enable_ceph_rgw_True) 2025-11-01 14:22:57.557142 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-11-01 14:22:57.557152 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-11-01 14:22:57.557163 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-11-01 14:22:57.557173 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-11-01 14:22:57.557195 | orchestrator | 2025-11-01 14:22:57.557207 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-11-01 14:22:57.557259 | orchestrator | 2025-11-01 14:22:57.557283 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-11-01 14:22:57.557305 | orchestrator | Saturday 01 November 2025 14:22:19 +0000 (0:00:00.808) 0:00:02.052 ***** 2025-11-01 14:22:57.557317 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:22:57.557330 | orchestrator | 2025-11-01 14:22:57.557341 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-11-01 14:22:57.557351 | orchestrator | Saturday 01 November 2025 14:22:20 +0000 (0:00:01.677) 0:00:03.729 ***** 2025-11-01 14:22:57.557362 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-11-01 14:22:57.557372 | orchestrator | 2025-11-01 14:22:57.557383 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-11-01 14:22:57.557394 | orchestrator | Saturday 01 November 2025 14:22:25 +0000 (0:00:04.199) 0:00:07.929 ***** 2025-11-01 14:22:57.557406 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-11-01 14:22:57.557418 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-11-01 14:22:57.557429 | orchestrator | 2025-11-01 14:22:57.557440 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-11-01 14:22:57.557451 | orchestrator | Saturday 01 November 2025 14:22:32 +0000 (0:00:07.736) 0:00:15.665 ***** 2025-11-01 14:22:57.557461 | orchestrator | ok: [testbed-manager] => (item=service) 2025-11-01 14:22:57.557472 | orchestrator | 2025-11-01 14:22:57.557483 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-11-01 14:22:57.557494 | orchestrator | Saturday 01 November 2025 14:22:36 +0000 (0:00:03.531) 0:00:19.196 ***** 2025-11-01 14:22:57.557565 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 14:22:57.557576 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-11-01 14:22:57.557586 | orchestrator | 2025-11-01 14:22:57.557597 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-11-01 14:22:57.557607 | orchestrator | Saturday 01 November 2025 14:22:41 +0000 (0:00:04.650) 0:00:23.847 ***** 2025-11-01 14:22:57.557618 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-11-01 14:22:57.557629 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-11-01 14:22:57.557639 | orchestrator | 2025-11-01 14:22:57.557650 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] 
******************** 2025-11-01 14:22:57.557661 | orchestrator | Saturday 01 November 2025 14:22:49 +0000 (0:00:08.183) 0:00:32.030 ***** 2025-11-01 14:22:57.557672 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-11-01 14:22:57.557682 | orchestrator | 2025-11-01 14:22:57.557693 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:22:57.557713 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:22:57.557725 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:22:57.557736 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:22:57.557746 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:22:57.557757 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:22:57.557846 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:22:57.557873 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:22:57.557885 | orchestrator | 2025-11-01 14:22:57.557920 | orchestrator | 2025-11-01 14:22:57.557933 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:22:57.557962 | orchestrator | Saturday 01 November 2025 14:22:54 +0000 (0:00:04.830) 0:00:36.861 ***** 2025-11-01 14:22:57.557974 | orchestrator | =============================================================================== 2025-11-01 14:22:57.557985 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 8.18s 2025-11-01 14:22:57.557995 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.74s 2025-11-01 14:22:57.558006 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.83s 2025-11-01 14:22:57.558069 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.65s 2025-11-01 14:22:57.558084 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.20s 2025-11-01 14:22:57.558095 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.53s 2025-11-01 14:22:57.558105 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.68s 2025-11-01 14:22:57.558116 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.94s 2025-11-01 14:22:57.558127 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s 2025-11-01 14:22:57.558138 | orchestrator | 2025-11-01 14:22:57.558774 | orchestrator | 2025-11-01 14:22:57.558870 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:22:57.558886 | orchestrator | 2025-11-01 14:22:57.558899 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:22:57.558910 | orchestrator | Saturday 01 November 2025 14:20:44 +0000 (0:00:00.286) 0:00:00.286 ***** 2025-11-01 14:22:57.558921 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:22:57.558933 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:22:57.558944 
| orchestrator | ok: [testbed-node-2] 2025-11-01 14:22:57.558955 | orchestrator | 2025-11-01 14:22:57.558966 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:22:57.558976 | orchestrator | Saturday 01 November 2025 14:20:44 +0000 (0:00:00.355) 0:00:00.641 ***** 2025-11-01 14:22:57.558987 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-11-01 14:22:57.558999 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-11-01 14:22:57.559009 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-11-01 14:22:57.559020 | orchestrator | 2025-11-01 14:22:57.559031 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-11-01 14:22:57.559042 | orchestrator | 2025-11-01 14:22:57.559053 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-11-01 14:22:57.559064 | orchestrator | Saturday 01 November 2025 14:20:45 +0000 (0:00:00.716) 0:00:01.358 ***** 2025-11-01 14:22:57.559075 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:22:57.559112 | orchestrator | 2025-11-01 14:22:57.559123 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-11-01 14:22:57.559134 | orchestrator | Saturday 01 November 2025 14:20:46 +0000 (0:00:01.171) 0:00:02.529 ***** 2025-11-01 14:22:57.559145 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-11-01 14:22:57.559156 | orchestrator | 2025-11-01 14:22:57.559166 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-11-01 14:22:57.559177 | orchestrator | Saturday 01 November 2025 14:20:50 +0000 (0:00:03.826) 0:00:06.355 ***** 2025-11-01 14:22:57.559188 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-11-01 14:22:57.559199 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-11-01 14:22:57.559209 | orchestrator | 2025-11-01 14:22:57.559220 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-11-01 14:22:57.559230 | orchestrator | Saturday 01 November 2025 14:20:57 +0000 (0:00:07.261) 0:00:13.616 ***** 2025-11-01 14:22:57.559241 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-01 14:22:57.559252 | orchestrator | 2025-11-01 14:22:57.559263 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-11-01 14:22:57.559273 | orchestrator | Saturday 01 November 2025 14:21:01 +0000 (0:00:03.584) 0:00:17.201 ***** 2025-11-01 14:22:57.559284 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 14:22:57.559295 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-11-01 14:22:57.559307 | orchestrator | 2025-11-01 14:22:57.559319 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-11-01 14:22:57.559331 | orchestrator | Saturday 01 November 2025 14:21:05 +0000 (0:00:04.458) 0:00:21.660 ***** 2025-11-01 14:22:57.559343 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 14:22:57.559355 | orchestrator | 2025-11-01 14:22:57.559367 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 
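
The service-ks-register tasks above show the standard Keystone registration flow for Magnum as recorded in this run: a magnum service of type container-infra, internal and public endpoints on port 9511 (api-int.testbed.osism.xyz and api.testbed.osism.xyz), the service project, a magnum service user, and an admin role grant on that project. A minimal standalone sketch of the same flow using the openstack.cloud collection follows; it is illustrative only and not the kolla-ansible service-ks-register role itself, and the cloud name, region, and password are placeholders rather than values taken from this job.

- name: Register magnum in Keystone (illustrative sketch only)
  hosts: localhost
  gather_facts: false
  vars:
    # Placeholder -- not a value taken from this job
    magnum_keystone_password: "CHANGE_ME"
  tasks:
    - name: Create the container-infra service
      openstack.cloud.catalog_service:
        cloud: testbed            # assumed clouds.yaml entry name
        name: magnum
        service_type: container-infra
        state: present

    - name: Create internal and public endpoints
      openstack.cloud.endpoint:
        cloud: testbed
        service: magnum
        endpoint_interface: "{{ item.interface }}"
        url: "{{ item.url }}"
        region: RegionOne         # assumed region name
        state: present
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:9511/v1" }
        - { interface: public, url: "https://api.testbed.osism.xyz:9511/v1" }

    - name: Create the magnum service user
      openstack.cloud.identity_user:
        cloud: testbed
        name: magnum
        password: "{{ magnum_keystone_password }}"
        domain: default
        default_project: service
        state: present
      no_log: true                # keep the password out of task output

    - name: Grant the admin role to magnum on the service project
      openstack.cloud.role_assignment:
        cloud: testbed
        user: magnum
        role: admin
        project: service

The ceph-rgw play earlier in this log follows the same pattern, with swift (object-store) as the service, the RGW endpoints on port 6780 under /swift/v1/AUTH_%(project_id)s, the ceph_rgw user in the service project, and an additional ResellerAdmin role.
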
2025-11-01 14:22:57.559379 | orchestrator | Saturday 01 November 2025 14:21:09 +0000 (0:00:03.689) 0:00:25.350 ***** 2025-11-01 14:22:57.559391 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-11-01 14:22:57.559403 | orchestrator | 2025-11-01 14:22:57.559415 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-11-01 14:22:57.559427 | orchestrator | Saturday 01 November 2025 14:21:14 +0000 (0:00:04.556) 0:00:29.907 ***** 2025-11-01 14:22:57.559439 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:22:57.559451 | orchestrator | 2025-11-01 14:22:57.559463 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-11-01 14:22:57.559474 | orchestrator | Saturday 01 November 2025 14:21:17 +0000 (0:00:03.822) 0:00:33.730 ***** 2025-11-01 14:22:57.559486 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:22:57.559521 | orchestrator | 2025-11-01 14:22:57.559533 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-11-01 14:22:57.559545 | orchestrator | Saturday 01 November 2025 14:21:22 +0000 (0:00:04.565) 0:00:38.295 ***** 2025-11-01 14:22:57.559557 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:22:57.559569 | orchestrator | 2025-11-01 14:22:57.559595 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-11-01 14:22:57.559608 | orchestrator | Saturday 01 November 2025 14:21:26 +0000 (0:00:04.189) 0:00:42.485 ***** 2025-11-01 14:22:57.559641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.559671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.559683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.559696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.559714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.559733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.559752 | orchestrator | 2025-11-01 14:22:57.559763 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-11-01 14:22:57.559774 | orchestrator | Saturday 01 November 2025 14:21:28 +0000 (0:00:01.539) 0:00:44.025 ***** 2025-11-01 14:22:57.559784 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:57.559795 | orchestrator | 2025-11-01 
14:22:57.559806 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-11-01 14:22:57.559817 | orchestrator | Saturday 01 November 2025 14:21:28 +0000 (0:00:00.145) 0:00:44.170 ***** 2025-11-01 14:22:57.559827 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:57.559838 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:57.559848 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:57.559858 | orchestrator | 2025-11-01 14:22:57.559870 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-11-01 14:22:57.559880 | orchestrator | Saturday 01 November 2025 14:21:29 +0000 (0:00:00.872) 0:00:45.043 ***** 2025-11-01 14:22:57.559891 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 14:22:57.559902 | orchestrator | 2025-11-01 14:22:57.559912 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-11-01 14:22:57.559923 | orchestrator | Saturday 01 November 2025 14:21:31 +0000 (0:00:02.704) 0:00:47.748 ***** 2025-11-01 14:22:57.559934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.559946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.559974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.560001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.560013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.560024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.560035 | orchestrator | 2025-11-01 14:22:57.560046 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-11-01 14:22:57.560057 | orchestrator | Saturday 01 November 2025 14:21:36 +0000 (0:00:04.415) 0:00:52.164 ***** 2025-11-01 14:22:57.560068 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:22:57.560079 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:22:57.560089 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:22:57.560100 | orchestrator | 2025-11-01 14:22:57.560111 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-11-01 14:22:57.560121 | orchestrator | Saturday 01 November 2025 14:21:37 +0000 (0:00:00.928) 0:00:53.092 ***** 2025-11-01 14:22:57.560132 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-11-01 14:22:57.560143 | orchestrator | 2025-11-01 14:22:57.560153 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-11-01 14:22:57.560164 | orchestrator | Saturday 01 November 2025 14:21:39 +0000 (0:00:02.049) 0:00:55.142 ***** 2025-11-01 14:22:57.560181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.560204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.560216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.560227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.560239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.560256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.560267 | orchestrator | 2025-11-01 14:22:57.560283 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-11-01 14:22:57.560294 | orchestrator | Saturday 01 November 2025 14:21:43 +0000 (0:00:04.109) 0:00:59.251 ***** 2025-11-01 14:22:57.560312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 14:22:57.560324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:22:57.560335 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:57.560346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 14:22:57.560358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:22:57.560375 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:57.560391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 14:22:57.560410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:22:57.560421 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:57.560432 | orchestrator | 2025-11-01 14:22:57.560443 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-11-01 14:22:57.560454 | orchestrator | Saturday 01 November 2025 14:21:45 +0000 (0:00:02.217) 0:01:01.468 ***** 2025-11-01 14:22:57.560465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 14:22:57.560476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 14:22:57.560494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:22:57.560532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:22:57.560545 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:57.560556 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:57.560575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 14:22:57.560587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:22:57.560598 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:57.560609 | orchestrator | 2025-11-01 14:22:57.560619 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-11-01 14:22:57.560630 | orchestrator | Saturday 01 November 2025 14:21:47 +0000 (0:00:02.290) 0:01:03.759 ***** 2025-11-01 14:22:57.560641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.560664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.560682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.560693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.560704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.560721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.560732 | orchestrator | 2025-11-01 14:22:57.560743 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-11-01 14:22:57.560754 | orchestrator | Saturday 01 November 2025 14:21:50 +0000 (0:00:02.798) 0:01:06.557 ***** 2025-11-01 14:22:57.560770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.560787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.560799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.560811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.560828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.560844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.560855 | orchestrator | 2025-11-01 14:22:57.560866 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-11-01 14:22:57.560877 | orchestrator | Saturday 01 November 2025 14:21:56 +0000 (0:00:05.436) 0:01:11.994 ***** 2025-11-01 14:22:57.560894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 14:22:57.560906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:22:57.560917 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:57.560942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 14:22:57.560961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:22:57.560972 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:57.560987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 14:22:57.561005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:22:57.561016 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:57.561027 | orchestrator | 2025-11-01 14:22:57.561038 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-11-01 14:22:57.561049 | orchestrator | Saturday 01 November 2025 14:21:56 +0000 (0:00:00.679) 0:01:12.674 ***** 2025-11-01 14:22:57.561060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.561079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.561090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 14:22:57.561106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.561124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.561136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:22:57.561153 | orchestrator | 2025-11-01 14:22:57.561164 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-11-01 14:22:57.561175 | orchestrator | Saturday 01 November 2025 14:21:59 +0000 (0:00:02.574) 0:01:15.248 ***** 2025-11-01 14:22:57.561186 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:22:57.561197 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:22:57.561207 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:22:57.561218 | orchestrator | 2025-11-01 14:22:57.561228 | 
orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-11-01 14:22:57.561239 | orchestrator | Saturday 01 November 2025 14:21:59 +0000 (0:00:00.307) 0:01:15.556 ***** 2025-11-01 14:22:57.561250 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:22:57.561260 | orchestrator | 2025-11-01 14:22:57.561271 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-11-01 14:22:57.561282 | orchestrator | Saturday 01 November 2025 14:22:02 +0000 (0:00:02.378) 0:01:17.935 ***** 2025-11-01 14:22:57.561292 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:22:57.561303 | orchestrator | 2025-11-01 14:22:57.561314 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-11-01 14:22:57.561325 | orchestrator | Saturday 01 November 2025 14:22:04 +0000 (0:00:02.615) 0:01:20.550 ***** 2025-11-01 14:22:57.561335 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:22:57.561346 | orchestrator | 2025-11-01 14:22:57.561357 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-11-01 14:22:57.561367 | orchestrator | Saturday 01 November 2025 14:22:22 +0000 (0:00:18.165) 0:01:38.716 ***** 2025-11-01 14:22:57.561378 | orchestrator | 2025-11-01 14:22:57.561388 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-11-01 14:22:57.561399 | orchestrator | Saturday 01 November 2025 14:22:22 +0000 (0:00:00.066) 0:01:38.782 ***** 2025-11-01 14:22:57.561410 | orchestrator | 2025-11-01 14:22:57.561420 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-11-01 14:22:57.561431 | orchestrator | Saturday 01 November 2025 14:22:23 +0000 (0:00:00.166) 0:01:38.949 ***** 2025-11-01 14:22:57.561442 | orchestrator | 2025-11-01 14:22:57.561452 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-11-01 14:22:57.561463 | orchestrator | Saturday 01 November 2025 14:22:23 +0000 (0:00:00.163) 0:01:39.113 ***** 2025-11-01 14:22:57.561474 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:22:57.561484 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:22:57.561495 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:22:57.561525 | orchestrator | 2025-11-01 14:22:57.561536 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-11-01 14:22:57.561547 | orchestrator | Saturday 01 November 2025 14:22:40 +0000 (0:00:17.495) 0:01:56.609 ***** 2025-11-01 14:22:57.561557 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:22:57.561568 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:22:57.561579 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:22:57.561589 | orchestrator | 2025-11-01 14:22:57.561600 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:22:57.561616 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 14:22:57.561627 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-01 14:22:57.561645 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-01 14:22:57.561656 | orchestrator | 2025-11-01 14:22:57.561667 | orchestrator | 2025-11-01 14:22:57.561677 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-11-01 14:22:57.561688 | orchestrator | Saturday 01 November 2025 14:22:54 +0000 (0:00:13.520) 0:02:10.129 ***** 2025-11-01 14:22:57.561699 | orchestrator | =============================================================================== 2025-11-01 14:22:57.561709 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.17s 2025-11-01 14:22:57.561725 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 17.50s 2025-11-01 14:22:57.561736 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 13.52s 2025-11-01 14:22:57.561747 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.26s 2025-11-01 14:22:57.561757 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.44s 2025-11-01 14:22:57.561768 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.57s 2025-11-01 14:22:57.561779 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.56s 2025-11-01 14:22:57.561789 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.46s 2025-11-01 14:22:57.561800 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 4.42s 2025-11-01 14:22:57.561810 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.19s 2025-11-01 14:22:57.561821 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 4.11s 2025-11-01 14:22:57.561831 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.83s 2025-11-01 14:22:57.561842 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.82s 2025-11-01 14:22:57.561853 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.69s 2025-11-01 14:22:57.561863 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.58s 2025-11-01 14:22:57.561874 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.80s 2025-11-01 14:22:57.561884 | orchestrator | magnum : Check if kubeconfig file is supplied --------------------------- 2.70s 2025-11-01 14:22:57.561895 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.62s 2025-11-01 14:22:57.561906 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.57s 2025-11-01 14:22:57.561916 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.38s 2025-11-01 14:22:57.561927 | orchestrator | 2025-11-01 14:22:57 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:22:57.561938 | orchestrator | 2025-11-01 14:22:57 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:23:00.617297 | orchestrator | 2025-11-01 14:23:00 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:23:00.617391 | orchestrator | 2025-11-01 14:23:00 | INFO  | Task d1a0469c-c7e1-4fe7-b054-3e203e9a6f20 is in state STARTED 2025-11-01 14:23:00.617587 | orchestrator | 2025-11-01 14:23:00 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state STARTED 2025-11-01 14:23:00.620110 | orchestrator | 2025-11-01 14:23:00 | INFO  | Task 
090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:23:00.620132 | orchestrator | 2025-11-01 14:23:00 | INFO  | Wait 1 second(s) until the next check
[... repeated status polling condensed: tasks fd4f2922-15e0-4936-9f30-05f9dfc1d3cc, d1a0469c-c7e1-4fe7-b054-3e203e9a6f20, 9ffbc215-6eca-4421-8f65-4cb7801c29e0 and 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 remained in state STARTED, rechecked roughly every 3 seconds between 14:23:03 and 14:24:38 ...]
2025-11-01 14:24:41.444866 | orchestrator | 2025-11-01 14:24:41 | INFO  | Task
fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:24:41.445025 | orchestrator | 2025-11-01 14:24:41 | INFO  | Task d1a0469c-c7e1-4fe7-b054-3e203e9a6f20 is in state STARTED 2025-11-01 14:24:41.447812 | orchestrator | 2025-11-01 14:24:41 | INFO  | Task 9ffbc215-6eca-4421-8f65-4cb7801c29e0 is in state SUCCESS 2025-11-01 14:24:41.449827 | orchestrator | 2025-11-01 14:24:41.449854 | orchestrator | 2025-11-01 14:24:41.449864 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:24:41.449875 | orchestrator | 2025-11-01 14:24:41.449884 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:24:41.449894 | orchestrator | Saturday 01 November 2025 14:20:58 +0000 (0:00:00.309) 0:00:00.309 ***** 2025-11-01 14:24:41.449902 | orchestrator | ok: [testbed-manager] 2025-11-01 14:24:41.449912 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:24:41.449921 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:24:41.449930 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:24:41.449938 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:24:41.449947 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:24:41.449955 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:24:41.449964 | orchestrator | 2025-11-01 14:24:41.449973 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:24:41.449981 | orchestrator | Saturday 01 November 2025 14:20:59 +0000 (0:00:00.943) 0:00:01.253 ***** 2025-11-01 14:24:41.449991 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-11-01 14:24:41.449999 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-11-01 14:24:41.450008 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-11-01 14:24:41.450076 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-11-01 14:24:41.450086 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-11-01 14:24:41.450094 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-11-01 14:24:41.450103 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-11-01 14:24:41.450111 | orchestrator | 2025-11-01 14:24:41.450120 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-11-01 14:24:41.450128 | orchestrator | 2025-11-01 14:24:41.450137 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-11-01 14:24:41.450145 | orchestrator | Saturday 01 November 2025 14:21:00 +0000 (0:00:00.816) 0:00:02.070 ***** 2025-11-01 14:24:41.450155 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:24:41.450165 | orchestrator | 2025-11-01 14:24:41.450174 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-11-01 14:24:41.450182 | orchestrator | Saturday 01 November 2025 14:21:02 +0000 (0:00:01.721) 0:00:03.792 ***** 2025-11-01 14:24:41.450223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.450237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.450298 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-01 14:24:41.450310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.450331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.450348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.450357 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.450366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.450400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.450416 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.450571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.450587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.450606 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.450623 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.450633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.450644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.450655 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.450670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.450682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.450699 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-01 14:24:41.450718 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.450729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.450739 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.450749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.450763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.450773 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.450783 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.450803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.450838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.450848 | orchestrator | 2025-11-01 14:24:41.450857 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-11-01 14:24:41.450866 | orchestrator | Saturday 01 November 2025 14:21:05 +0000 (0:00:03.130) 0:00:06.922 ***** 2025-11-01 14:24:41.450875 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:24:41.450884 | orchestrator | 2025-11-01 14:24:41.450892 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-11-01 14:24:41.450922 | orchestrator | Saturday 01 November 2025 14:21:06 +0000 (0:00:01.458) 0:00:08.381 ***** 2025-11-01 14:24:41.450933 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-01 14:24:41.450947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.450956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.450965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.450986 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.450995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.451004 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.451013 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.451022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.451032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.451079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.451095 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.451109 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.451119 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.451128 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.451197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.451207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.451221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.451230 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.451251 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.451265 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.451275 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-01 14:24:41.451285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.451294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.451307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.451321 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.451331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.452044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.452083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.452093 | orchestrator | 2025-11-01 14:24:41.452122 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-11-01 14:24:41.452167 | orchestrator | Saturday 01 November 2025 14:21:13 +0000 (0:00:06.268) 0:00:14.650 ***** 2025-11-01 14:24:41.452178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:24:41.452187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.452285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:24:41.452295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.452339 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452348 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:24:41.452357 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:24:41.452372 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-01 14:24:41.452387 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:24:41.452397 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.452411 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-01 14:24:41.452422 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452431 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:24:41.452440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:24:41.452450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.452508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452517 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:24:41.452532 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:24:41.452542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.452551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.452560 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:24:41.452569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:24:41.452583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.452597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.452606 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:24:41.452615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:24:41.452624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.452639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.452650 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:24:41.452660 | orchestrator | 2025-11-01 14:24:41.452670 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-11-01 14:24:41.452705 | orchestrator | Saturday 01 November 2025 14:21:16 +0000 (0:00:02.735) 0:00:17.386 ***** 2025-11-01 14:24:41.452716 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-01 14:24:41.452760 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:24:41.452770 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.452785 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-01 14:24:41.452796 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452806 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:24:41.452839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:24:41.452850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452875 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.452886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452895 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:24:41.452905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:24:41.452915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.452950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.452965 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:24:41.452974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:24:41.453014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.453024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.453038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.453047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 14:24:41.453056 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:24:41.453069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:24:41.453079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.453093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.453102 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:24:41.453110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:24:41.453119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.453132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.453141 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:24:41.453150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 14:24:41.453159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.453174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 14:24:41.453183 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:24:41.453192 | orchestrator | 2025-11-01 14:24:41.453201 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-11-01 14:24:41.453215 | orchestrator | Saturday 01 November 2025 14:21:19 +0000 (0:00:03.320) 0:00:20.706 ***** 2025-11-01 14:24:41.453224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.453233 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-01 14:24:41.453242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.453255 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.453265 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.453273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.453288 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.453303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.453312 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.453321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.453330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.453339 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.453352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.453362 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.453375 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.453390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
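(Editor's note, illustrative only.) The per-item results above come from kolla-ansible iterating over a service map for the prometheus role: each item pairs a service key with a definition dict (container_name, group, enabled, image, volumes, dimensions, and for frontend services a haproxy section), and each host only acts on the services whose inventory group it belongs to, which is why the same item shows "changed" on some hosts and "skipping" on others. The sketch below is not kolla-ansible code; it is a minimal, hypothetical Python model of that data shape and filtering logic, with trimmed example values, to make the loop output easier to read.

# Illustrative sketch only -- hypothetical values mirroring the item dicts printed above.
prometheus_services = {
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "group": "prometheus-node-exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-node-exporter:2024.2",
        "pid_mode": "host",
        "volumes": [
            "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
            "/:/host:ro,rslave",
        ],
        "dimensions": {},
    },
    "prometheus-cadvisor": {
        "container_name": "prometheus_cadvisor",
        "group": "prometheus-cadvisor",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-cadvisor:2024.2",
        "volumes": ["/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro"],
        "dimensions": {},
    },
}

def services_for_host(services, host_groups):
    """Return the enabled services whose inventory group is among the host's
    groups -- the same condition that decides 'changed' vs. 'skipping' per item."""
    return {
        key: svc
        for key, svc in services.items()
        if svc.get("enabled") and svc.get("group") in host_groups
    }

# Example: a host that is only in the node-exporter and cadvisor groups
# (a hypothetical grouping) would act on exactly those two items.
print(sorted(services_for_host(
    prometheus_services,
    {"prometheus-node-exporter", "prometheus-cadvisor"},
)))

This matches the pattern visible in the task output: the manager handles server, alertmanager, and blackbox-exporter items, while the compute-style nodes additionally handle the libvirt-exporter item.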
2025-11-01 14:24:41.453399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.453408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.453416 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.453429 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.453438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.453447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.453466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.453476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.453528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.453539 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-01 14:24:41.453554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.453564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.453572 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.453589 | orchestrator | 2025-11-01 14:24:41.453598 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-11-01 14:24:41.453607 | orchestrator | Saturday 01 November 2025 14:21:26 +0000 (0:00:07.580) 0:00:28.287 ***** 2025-11-01 14:24:41.453615 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 14:24:41.453624 | orchestrator | 2025-11-01 14:24:41.453633 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-11-01 14:24:41.453646 | orchestrator | Saturday 01 November 2025 14:21:28 +0000 (0:00:01.442) 0:00:29.729 ***** 2025-11-01 14:24:41.453656 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081022, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3797204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453666 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081022, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3797204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453675 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081022, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3797204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453684 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081022, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3797204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.453697 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081022, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3797204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453707 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1081050, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3957207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453727 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081022, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3797204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453737 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1081050, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3957207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453746 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1081016, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3786874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453755 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 
1081050, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3957207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453764 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1081022, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3797204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453779 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081037, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.387202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453794 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1081050, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3957207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453807 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1081016, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3786874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453816 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1081050, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3957207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453825 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1081016, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3786874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453834 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1081050, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3957207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453843 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1081013, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3767204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453856 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1081050, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3957207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.453870 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1081016, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3786874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453883 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1081016, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3786874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453893 | 
orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081037, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.387202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453902 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081037, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.387202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453910 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081037, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.387202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453919 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1081016, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3786874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453932 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081037, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.387202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453947 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081023, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.381243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.453956 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1081013, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3767204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454353 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1081016, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3786874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.454370 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081037, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.387202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454380 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1081013, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3767204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454389 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1081013, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3767204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454403 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1081036, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 
1762003892.3858707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454420 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081023, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.381243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454429 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081023, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.381243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454443 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1081013, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3767204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454453 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081023, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.381243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454462 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1081013, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3767204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454471 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 5593, 'inode': 1081025, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3816264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454538 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081023, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.381243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454550 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081023, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.381243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454559 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1081036, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3858707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454573 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1081036, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3858707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454582 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1081037, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.387202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.454590 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1081036, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3858707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454598 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081019, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3787205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454615 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1081036, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3858707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454624 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1081036, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3858707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454632 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1081025, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3816264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454644 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1081025, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3816264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454652 | 
orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1081025, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3816264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454660 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1081025, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3816264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454669 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1081025, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3816264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454686 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081048, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3947206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454695 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081019, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3787205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454703 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081019, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3787205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454716 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081019, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3787205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454725 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081019, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3787205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454733 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081019, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3787205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454746 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1081013, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3767204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.454761 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081048, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3947206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454770 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081010, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 
'mtime': 1761955328.0, 'ctime': 1762003892.375345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454778 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081048, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3947206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454790 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081048, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3947206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454799 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081048, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3947206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454807 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081048, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3947206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454820 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081010, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.375345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454832 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081010, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.375345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454841 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081010, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.375345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454849 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081010, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.375345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454862 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081073, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.404268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454870 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081010, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.375345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454879 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1081040, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3937206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454892 | orchestrator | skipping: [testbed-node-4] 
=> (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081073, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.404268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454904 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081073, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.404268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454912 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081073, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.404268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454920 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081073, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.404268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454933 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1081040, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3937206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454941 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1081023, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.381243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.454949 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081073, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.404268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454966 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1081040, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3937206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454976 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1081040, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3937206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454989 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081015, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3783586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.454999 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081015, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3783586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455013 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1081040, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3937206, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455022 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081015, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3783586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455036 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1081040, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3937206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455046 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1081011, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3767204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455054 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081015, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3783586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455067 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1081011, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3767204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455077 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
3, 'inode': 1081015, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3783586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455091 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1081011, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3767204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455100 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081015, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3783586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455114 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1081011, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3767204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455123 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1081034, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3858707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455133 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1081034, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3858707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455145 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1081036, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3858707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.455155 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1081034, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3858707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455168 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1081011, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3767204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455177 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1081034, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3858707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455191 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081029, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3830137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455200 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1081011, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3767204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455209 | 
orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081029, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3830137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455222 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081029, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3830137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455231 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1081034, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3858707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455244 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081029, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3830137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455258 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081071, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.4039626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455267 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:24:41.455276 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081071, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.4039626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455285 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:24:41.455294 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081071, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.4039626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455302 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:24:41.455312 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1081034, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3858707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455324 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1081025, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3816264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.455334 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081029, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3830137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455346 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081071, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.4039626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455361 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:24:41.455369 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1081029, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3830137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455377 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081071, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.4039626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455385 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:24:41.455393 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081071, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.4039626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 14:24:41.455401 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:24:41.455413 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1081019, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3787205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.455421 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081048, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3947206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.455430 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081010, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.375345, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.455446 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1081073, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.404268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.455455 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1081040, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3937206, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.455463 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1081015, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3783586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.455471 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1081011, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3767204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.455483 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1081034, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3858707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.455504 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 
1081029, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3830137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.455512 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1081071, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.4039626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 14:24:41.455525 | orchestrator | 2025-11-01 14:24:41.455533 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-11-01 14:24:41.455541 | orchestrator | Saturday 01 November 2025 14:22:01 +0000 (0:00:33.569) 0:01:03.299 ***** 2025-11-01 14:24:41.455549 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 14:24:41.455558 | orchestrator | 2025-11-01 14:24:41.455569 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-11-01 14:24:41.455577 | orchestrator | Saturday 01 November 2025 14:22:02 +0000 (0:00:00.809) 0:01:04.108 ***** 2025-11-01 14:24:41.455585 | orchestrator | [WARNING]: Skipped 2025-11-01 14:24:41.455593 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-01 14:24:41.455601 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-11-01 14:24:41.455609 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-01 14:24:41.455617 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-11-01 14:24:41.455625 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 14:24:41.455632 | orchestrator | [WARNING]: Skipped 2025-11-01 14:24:41.455640 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-01 14:24:41.455648 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-11-01 14:24:41.455656 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-01 14:24:41.455663 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-11-01 14:24:41.455671 | orchestrator | [WARNING]: Skipped 2025-11-01 14:24:41.455679 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-01 14:24:41.455687 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-11-01 14:24:41.455694 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-01 14:24:41.455702 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-11-01 14:24:41.455710 | orchestrator | [WARNING]: Skipped 2025-11-01 14:24:41.455718 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-01 14:24:41.455725 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-11-01 14:24:41.455733 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-01 14:24:41.455741 | 
orchestrator | node-2/prometheus.yml.d' is not a directory 2025-11-01 14:24:41.455749 | orchestrator | [WARNING]: Skipped 2025-11-01 14:24:41.455756 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-01 14:24:41.455764 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-11-01 14:24:41.455772 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-01 14:24:41.455779 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-11-01 14:24:41.455787 | orchestrator | [WARNING]: Skipped 2025-11-01 14:24:41.455795 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-01 14:24:41.455803 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-11-01 14:24:41.455811 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-01 14:24:41.455818 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-11-01 14:24:41.455826 | orchestrator | [WARNING]: Skipped 2025-11-01 14:24:41.455834 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-01 14:24:41.455842 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-11-01 14:24:41.455849 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-11-01 14:24:41.455862 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-11-01 14:24:41.455870 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 14:24:41.455878 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-01 14:24:41.455885 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-01 14:24:41.455893 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-01 14:24:41.455901 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-01 14:24:41.455913 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-01 14:24:41.455921 | orchestrator | 2025-11-01 14:24:41.455928 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-11-01 14:24:41.455936 | orchestrator | Saturday 01 November 2025 14:22:04 +0000 (0:00:02.171) 0:01:06.280 ***** 2025-11-01 14:24:41.455944 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-01 14:24:41.455952 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:24:41.455960 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-01 14:24:41.455968 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:24:41.455976 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-01 14:24:41.455984 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:24:41.455991 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-01 14:24:41.455999 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:24:41.456007 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-01 14:24:41.456015 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:24:41.456022 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-11-01 14:24:41.456030 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:24:41.456038 | 
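
The "[WARNING]: Skipped ... is not a directory" lines above are the override lookup at work: the role checks for a per-host drop-in directory (.../overlays/prometheus/<inventory_hostname>/prometheus.yml.d), and in this testbed configuration no such directories exist, so the lookup finds nothing and the hosts still report ok. A rough Python sketch of that pattern under those assumptions; the directory layout comes from the warnings themselves, while the function and the *.yml pattern are made up for illustration:

    # Sketch of the per-host override lookup implied by the warnings above.
    # Assumption: drop-ins would live in <overlay_dir>/<hostname>/prometheus.yml.d/;
    # a missing directory simply means "no overrides for this host", not a failure.
    from pathlib import Path

    OVERLAY_DIR = Path("/opt/configuration/environments/kolla/files/overlays/prometheus")


    def find_host_overrides(hostname: str) -> list[Path]:
        dropin = OVERLAY_DIR / hostname / "prometheus.yml.d"
        if not dropin.is_dir():
            # Mirrors the "[WARNING]: Skipped ... is not a directory" case in the log.
            print(f"skipped {dropin}: not a directory")
            return []
        return sorted(dropin.glob("*.yml"))


    for host in ["testbed-manager"] + [f"testbed-node-{i}" for i in range(6)]:
        find_host_overrides(host)
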
orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-11-01 14:24:41.456046 | orchestrator | 2025-11-01 14:24:41.456053 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-11-01 14:24:41.456061 | orchestrator | Saturday 01 November 2025 14:22:22 +0000 (0:00:17.618) 0:01:23.899 ***** 2025-11-01 14:24:41.456069 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-01 14:24:41.456080 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:24:41.456088 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-01 14:24:41.456096 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:24:41.456104 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-01 14:24:41.456112 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:24:41.456119 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-01 14:24:41.456127 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:24:41.456135 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-01 14:24:41.456143 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:24:41.456151 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-11-01 14:24:41.456159 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:24:41.456166 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-11-01 14:24:41.456174 | orchestrator | 2025-11-01 14:24:41.456182 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-11-01 14:24:41.456189 | orchestrator | Saturday 01 November 2025 14:22:26 +0000 (0:00:04.250) 0:01:28.149 ***** 2025-11-01 14:24:41.456197 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-01 14:24:41.456210 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:24:41.456218 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-11-01 14:24:41.456226 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-01 14:24:41.456234 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:24:41.456242 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-01 14:24:41.456250 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:24:41.456258 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-01 14:24:41.456265 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:24:41.456273 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-01 14:24:41.456281 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:24:41.456289 | orchestrator | skipping: [testbed-node-3] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-11-01 14:24:41.456297 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:24:41.456305 | orchestrator | 2025-11-01 14:24:41.456312 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-11-01 14:24:41.456320 | orchestrator | Saturday 01 November 2025 14:22:29 +0000 (0:00:03.202) 0:01:31.352 ***** 2025-11-01 14:24:41.456328 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 14:24:41.456336 | orchestrator | 2025-11-01 14:24:41.456343 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-11-01 14:24:41.456351 | orchestrator | Saturday 01 November 2025 14:22:30 +0000 (0:00:00.904) 0:01:32.256 ***** 2025-11-01 14:24:41.456359 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:24:41.456367 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:24:41.456378 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:24:41.456386 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:24:41.456394 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:24:41.456401 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:24:41.456409 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:24:41.456417 | orchestrator | 2025-11-01 14:24:41.456425 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-11-01 14:24:41.456432 | orchestrator | Saturday 01 November 2025 14:22:31 +0000 (0:00:00.813) 0:01:33.070 ***** 2025-11-01 14:24:41.456440 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:24:41.456448 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:24:41.456456 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:24:41.456463 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:24:41.456471 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:24:41.456479 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:24:41.456499 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:24:41.456507 | orchestrator | 2025-11-01 14:24:41.456515 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-11-01 14:24:41.456523 | orchestrator | Saturday 01 November 2025 14:22:34 +0000 (0:00:02.681) 0:01:35.751 ***** 2025-11-01 14:24:41.456530 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-01 14:24:41.456538 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:24:41.456546 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-01 14:24:41.456554 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:24:41.456562 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-01 14:24:41.456569 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:24:41.456582 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-01 14:24:41.456590 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:24:41.456635 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-01 14:24:41.456644 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:24:41.456652 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-01 
14:24:41.456660 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:24:41.456668 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-11-01 14:24:41.456675 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:24:41.456683 | orchestrator | 2025-11-01 14:24:41.456691 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-11-01 14:24:41.456698 | orchestrator | Saturday 01 November 2025 14:22:36 +0000 (0:00:02.078) 0:01:37.830 ***** 2025-11-01 14:24:41.456706 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-01 14:24:41.456714 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:24:41.456722 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-01 14:24:41.456729 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:24:41.456737 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-01 14:24:41.456745 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:24:41.456753 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-11-01 14:24:41.456760 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-01 14:24:41.456768 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:24:41.456776 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-01 14:24:41.456783 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:24:41.456791 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-11-01 14:24:41.456799 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:24:41.456807 | orchestrator | 2025-11-01 14:24:41.456814 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-11-01 14:24:41.456822 | orchestrator | Saturday 01 November 2025 14:22:38 +0000 (0:00:01.855) 0:01:39.685 ***** 2025-11-01 14:24:41.456830 | orchestrator | [WARNING]: Skipped 2025-11-01 14:24:41.456837 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-11-01 14:24:41.456845 | orchestrator | due to this access issue: 2025-11-01 14:24:41.456853 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-11-01 14:24:41.456861 | orchestrator | not a directory 2025-11-01 14:24:41.456868 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 14:24:41.456876 | orchestrator | 2025-11-01 14:24:41.456884 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-11-01 14:24:41.456891 | orchestrator | Saturday 01 November 2025 14:22:39 +0000 (0:00:01.282) 0:01:40.967 ***** 2025-11-01 14:24:41.456899 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:24:41.456907 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:24:41.456914 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:24:41.456922 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:24:41.456930 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:24:41.456937 | orchestrator | skipping: 
[testbed-node-4] 2025-11-01 14:24:41.456945 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:24:41.456952 | orchestrator | 2025-11-01 14:24:41.456960 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-11-01 14:24:41.456968 | orchestrator | Saturday 01 November 2025 14:22:40 +0000 (0:00:01.160) 0:01:42.128 ***** 2025-11-01 14:24:41.456981 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:24:41.456993 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:24:41.457001 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:24:41.457009 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:24:41.457016 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:24:41.457024 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:24:41.457032 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:24:41.457040 | orchestrator | 2025-11-01 14:24:41.457047 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-11-01 14:24:41.457055 | orchestrator | Saturday 01 November 2025 14:22:42 +0000 (0:00:01.332) 0:01:43.460 ***** 2025-11-01 14:24:41.457063 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.457077 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.457086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.457094 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2025-11-01 14:24:41.457103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.457111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.457128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.457136 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.457145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.457157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.457165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.457173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.457181 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 14:24:41.457190 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.457210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.457218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.457227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.457239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.457247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.457256 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.457264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.457277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.457289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.457298 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-01 14:24:41.457311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 14:24:41.457319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.457327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.457335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.457348 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 14:24:41.457356 | orchestrator | 2025-11-01 14:24:41.457364 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-11-01 14:24:41.457372 | orchestrator | Saturday 01 November 2025 14:22:47 +0000 (0:00:05.870) 0:01:49.330 ***** 2025-11-01 14:24:41.457380 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-11-01 14:24:41.457388 | orchestrator | skipping: [testbed-manager] 2025-11-01 14:24:41.457395 | orchestrator | 2025-11-01 14:24:41.457407 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-01 14:24:41.457415 | orchestrator | Saturday 01 November 2025 14:22:49 +0000 (0:00:01.286) 0:01:50.617 ***** 2025-11-01 14:24:41.457423 | orchestrator | 2025-11-01 14:24:41.457430 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-01 14:24:41.457438 | orchestrator | Saturday 01 November 2025 14:22:49 +0000 (0:00:00.069) 0:01:50.686 ***** 2025-11-01 14:24:41.457445 | orchestrator | 2025-11-01 14:24:41.457453 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-01 14:24:41.457461 | orchestrator | Saturday 01 November 2025 14:22:49 +0000 (0:00:00.066) 0:01:50.753 ***** 2025-11-01 14:24:41.457469 | orchestrator | 2025-11-01 14:24:41.457476 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-01 14:24:41.457522 | orchestrator | Saturday 01 November 2025 14:22:49 +0000 (0:00:00.069) 0:01:50.823 ***** 2025-11-01 14:24:41.457531 | orchestrator | 2025-11-01 14:24:41.457539 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-01 14:24:41.457547 | orchestrator | Saturday 01 November 2025 14:22:49 +0000 (0:00:00.254) 0:01:51.077 ***** 2025-11-01 14:24:41.457555 | orchestrator | 2025-11-01 14:24:41.457562 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-01 14:24:41.457570 | orchestrator | Saturday 01 November 2025 14:22:49 +0000 (0:00:00.071) 0:01:51.149 ***** 2025-11-01 14:24:41.457578 | orchestrator | 2025-11-01 14:24:41.457585 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-01 14:24:41.457593 | orchestrator | Saturday 01 November 2025 14:22:49 +0000 (0:00:00.074) 0:01:51.223 ***** 2025-11-01 14:24:41.457601 | orchestrator | 2025-11-01 14:24:41.457608 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-11-01 14:24:41.457616 | orchestrator | Saturday 01 November 2025 14:22:49 +0000 (0:00:00.105) 0:01:51.329 ***** 2025-11-01 14:24:41.457624 | orchestrator | changed: [testbed-manager] 2025-11-01 14:24:41.457632 | orchestrator | 2025-11-01 14:24:41.457639 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-11-01 14:24:41.457651 | orchestrator | Saturday 01 November 2025 14:23:09 +0000 (0:00:19.602) 0:02:10.932 ***** 2025-11-01 14:24:41.457659 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:24:41.457667 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:24:41.457675 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:24:41.457682 | orchestrator | changed: [testbed-manager] 2025-11-01 14:24:41.457690 | orchestrator | changed: [testbed-node-5] 2025-11-01 
14:24:41.457697 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:24:41.457705 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:24:41.457718 | orchestrator | 2025-11-01 14:24:41.457726 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-11-01 14:24:41.457734 | orchestrator | Saturday 01 November 2025 14:23:23 +0000 (0:00:13.591) 0:02:24.524 ***** 2025-11-01 14:24:41.457741 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:24:41.457749 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:24:41.457757 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:24:41.457764 | orchestrator | 2025-11-01 14:24:41.457772 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-11-01 14:24:41.457780 | orchestrator | Saturday 01 November 2025 14:23:28 +0000 (0:00:05.519) 0:02:30.043 ***** 2025-11-01 14:24:41.457787 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:24:41.457795 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:24:41.457803 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:24:41.457810 | orchestrator | 2025-11-01 14:24:41.457818 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-11-01 14:24:41.457825 | orchestrator | Saturday 01 November 2025 14:23:39 +0000 (0:00:11.226) 0:02:41.270 ***** 2025-11-01 14:24:41.457833 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:24:41.457841 | orchestrator | changed: [testbed-manager] 2025-11-01 14:24:41.457848 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:24:41.457856 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:24:41.457864 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:24:41.457871 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:24:41.457879 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:24:41.457886 | orchestrator | 2025-11-01 14:24:41.457894 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-11-01 14:24:41.457902 | orchestrator | Saturday 01 November 2025 14:23:56 +0000 (0:00:16.896) 0:02:58.166 ***** 2025-11-01 14:24:41.457909 | orchestrator | changed: [testbed-manager] 2025-11-01 14:24:41.457917 | orchestrator | 2025-11-01 14:24:41.457925 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-11-01 14:24:41.457932 | orchestrator | Saturday 01 November 2025 14:24:04 +0000 (0:00:07.473) 0:03:05.640 ***** 2025-11-01 14:24:41.457940 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:24:41.457948 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:24:41.457955 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:24:41.457963 | orchestrator | 2025-11-01 14:24:41.457971 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-11-01 14:24:41.457978 | orchestrator | Saturday 01 November 2025 14:24:17 +0000 (0:00:12.748) 0:03:18.388 ***** 2025-11-01 14:24:41.457986 | orchestrator | changed: [testbed-manager] 2025-11-01 14:24:41.457994 | orchestrator | 2025-11-01 14:24:41.458002 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-11-01 14:24:41.458009 | orchestrator | Saturday 01 November 2025 14:24:23 +0000 (0:00:06.213) 0:03:24.602 ***** 2025-11-01 14:24:41.458042 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:24:41.458050 | orchestrator | changed: [testbed-node-4] 2025-11-01 
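
Taken together, the container restart handlers show how the exporters are distributed in this run: node-exporter and cadvisor restart on all seven hosts, the server, alertmanager and blackbox exporter only on testbed-manager, the mysqld/memcached/elasticsearch exporters on testbed-node-0 through -2, and the libvirt exporter on testbed-node-3 through -5. A small Python summary of that mapping as observed in this log (descriptive of this run only, not of the role's general placement rules):

    # Containers restarted per host, as reported by the handlers in this run.
    all_hosts = ["testbed-manager"] + [f"testbed-node-{i}" for i in range(6)]
    restarted_on = {
        "prometheus-server": ["testbed-manager"],
        "prometheus-alertmanager": ["testbed-manager"],
        "prometheus-blackbox-exporter": ["testbed-manager"],
        "prometheus-node-exporter": all_hosts,
        "prometheus-cadvisor": all_hosts,
        "prometheus-mysqld-exporter": [f"testbed-node-{i}" for i in range(3)],
        "prometheus-memcached-exporter": [f"testbed-node-{i}" for i in range(3)],
        "prometheus-elasticsearch-exporter": [f"testbed-node-{i}" for i in range(3)],
        "prometheus-libvirt-exporter": [f"testbed-node-{i}" for i in range(3, 6)],
    }

    for container, hosts in restarted_on.items():
        print(f"{container}: restarted on {len(hosts)} host(s)")
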
14:24:41.458057 | orchestrator | changed: [testbed-node-5]
2025-11-01 14:24:41.458063 | orchestrator |
2025-11-01 14:24:41.458070 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 14:24:41.458077 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-11-01 14:24:41.458084 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-11-01 14:24:41.458095 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-11-01 14:24:41.458102 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-11-01 14:24:41.458108 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-11-01 14:24:41.458119 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-11-01 14:24:41.458126 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-11-01 14:24:41.458133 | orchestrator |
2025-11-01 14:24:41.458139 | orchestrator |
2025-11-01 14:24:41.458146 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 14:24:41.458153 | orchestrator | Saturday 01 November 2025 14:24:38 +0000 (0:00:15.006) 0:03:39.608 *****
2025-11-01 14:24:41.458159 | orchestrator | ===============================================================================
2025-11-01 14:24:41.458166 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 33.57s
2025-11-01 14:24:41.458172 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 19.60s
2025-11-01 14:24:41.458179 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.62s
2025-11-01 14:24:41.458185 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.90s
2025-11-01 14:24:41.458192 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 15.01s
2025-11-01 14:24:41.458202 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.59s
2025-11-01 14:24:41.458209 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.75s
2025-11-01 14:24:41.458215 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.23s
2025-11-01 14:24:41.458222 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.58s
2025-11-01 14:24:41.458229 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.47s
2025-11-01 14:24:41.458235 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.27s
2025-11-01 14:24:41.458241 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.21s
2025-11-01 14:24:41.458248 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.87s
2025-11-01 14:24:41.458254 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.52s
2025-11-01 14:24:41.458261 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.25s
2025-11-01 14:24:41.458268 | orchestrator | service-cert-copy :
prometheus | Copying over backend internal TLS key --- 3.32s 2025-11-01 14:24:41.458274 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.20s 2025-11-01 14:24:41.458281 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.13s 2025-11-01 14:24:41.458287 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.74s 2025-11-01 14:24:41.458294 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.68s 2025-11-01 14:24:41.458301 | orchestrator | 2025-11-01 14:24:41 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:24:41.458307 | orchestrator | 2025-11-01 14:24:41 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:24:41.458314 | orchestrator | 2025-11-01 14:24:41 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:24:44.479128 | orchestrator | 2025-11-01 14:24:44 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:24:44.479238 | orchestrator | 2025-11-01 14:24:44 | INFO  | Task d1a0469c-c7e1-4fe7-b054-3e203e9a6f20 is in state STARTED 2025-11-01 14:24:44.479831 | orchestrator | 2025-11-01 14:24:44 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:24:44.480357 | orchestrator | 2025-11-01 14:24:44 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:24:44.480406 | orchestrator | 2025-11-01 14:24:44 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:24:47.511174 | orchestrator | 2025-11-01 14:24:47 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:24:47.513228 | orchestrator | 2025-11-01 14:24:47 | INFO  | Task d1a0469c-c7e1-4fe7-b054-3e203e9a6f20 is in state STARTED 2025-11-01 14:24:47.515278 | orchestrator | 2025-11-01 14:24:47 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:24:47.517030 | orchestrator | 2025-11-01 14:24:47 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:24:47.517047 | orchestrator | 2025-11-01 14:24:47 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:24:50.548540 | orchestrator | 2025-11-01 14:24:50 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:24:50.548903 | orchestrator | 2025-11-01 14:24:50 | INFO  | Task d1a0469c-c7e1-4fe7-b054-3e203e9a6f20 is in state STARTED 2025-11-01 14:24:50.550435 | orchestrator | 2025-11-01 14:24:50 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:24:50.552082 | orchestrator | 2025-11-01 14:24:50 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:24:50.552164 | orchestrator | 2025-11-01 14:24:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:24:53.601862 | orchestrator | 2025-11-01 14:24:53 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:24:53.605613 | orchestrator | 2025-11-01 14:24:53 | INFO  | Task d1a0469c-c7e1-4fe7-b054-3e203e9a6f20 is in state STARTED 2025-11-01 14:24:53.609938 | orchestrator | 2025-11-01 14:24:53 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:24:53.612189 | orchestrator | 2025-11-01 14:24:53 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:24:53.612608 | orchestrator | 2025-11-01 14:24:53 | INFO  | Wait 1 second(s) 
until the next check [the same polling output repeats every ~3 seconds from 14:24:56 through 14:26:15: on each check, tasks fd4f2922-15e0-4936-9f30-05f9dfc1d3cc, d1a0469c-c7e1-4fe7-b054-3e203e9a6f20, 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 and 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 are reported in state STARTED, followed by "Wait 1 second(s) until the next check"] 2025-11-01 14:26:19.047183
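The INFO lines in this stretch are not Ansible output at all; they come from the OSISM tooling on the manager, which queues each play as a task and then polls its state every few seconds until it leaves STARTED. The real loop lives in the osism CLI, but the same wait-until-done idea can be written as an Ansible task with retries/until; the sketch below polls a purely hypothetical status endpoint and is only meant to illustrate the pattern:

---
- name: Poll a task until it is no longer STARTED (illustrative sketch)
  hosts: localhost
  gather_facts: false
  vars:
    task_id: 0bc021cb-15ac-40f0-a0bd-c0335b9c2812
  tasks:
    - name: Wait for the task to finish
      ansible.builtin.uri:
        url: "https://manager.example/api/tasks/{{ task_id }}"   # hypothetical API
        return_content: true
      register: task_status
      retries: 60          # give up after 60 attempts
      delay: 3             # the log above shows roughly a 3-second cadence
      until: task_status.json.state != 'STARTED'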
| orchestrator | 2025-11-01 14:26:19 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:26:19.048170 | orchestrator | 2025-11-01 14:26:19 | INFO  | Task d1a0469c-c7e1-4fe7-b054-3e203e9a6f20 is in state STARTED 2025-11-01 14:26:19.049158 | orchestrator | 2025-11-01 14:26:19 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:26:19.050299 | orchestrator | 2025-11-01 14:26:19 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:26:19.050323 | orchestrator | 2025-11-01 14:26:19 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:26:22.089792 | orchestrator | 2025-11-01 14:26:22 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:26:22.098192 | orchestrator | 2025-11-01 14:26:22.098228 | orchestrator | 2025-11-01 14:26:22.098242 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:26:22.098253 | orchestrator | 2025-11-01 14:26:22.098264 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:26:22.098275 | orchestrator | Saturday 01 November 2025 14:22:59 +0000 (0:00:00.271) 0:00:00.271 ***** 2025-11-01 14:26:22.098286 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:26:22.098298 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:26:22.098309 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:26:22.098319 | orchestrator | 2025-11-01 14:26:22.098330 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:26:22.098376 | orchestrator | Saturday 01 November 2025 14:22:59 +0000 (0:00:00.339) 0:00:00.611 ***** 2025-11-01 14:26:22.098388 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-11-01 14:26:22.098399 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-11-01 14:26:22.098409 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-11-01 14:26:22.098420 | orchestrator | 2025-11-01 14:26:22.098430 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-11-01 14:26:22.098441 | orchestrator | 2025-11-01 14:26:22.098452 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-11-01 14:26:22.098462 | orchestrator | Saturday 01 November 2025 14:23:00 +0000 (0:00:00.646) 0:00:01.257 ***** 2025-11-01 14:26:22.098473 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:26:22.098508 | orchestrator | 2025-11-01 14:26:22.098518 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-11-01 14:26:22.098528 | orchestrator | Saturday 01 November 2025 14:23:01 +0000 (0:00:00.685) 0:00:01.943 ***** 2025-11-01 14:26:22.098559 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-11-01 14:26:22.098569 | orchestrator | 2025-11-01 14:26:22.098579 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-11-01 14:26:22.098589 | orchestrator | Saturday 01 November 2025 14:23:05 +0000 (0:00:03.967) 0:00:05.911 ***** 2025-11-01 14:26:22.098599 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-11-01 14:26:22.098608 | orchestrator | changed: [testbed-node-0] => (item=glance -> 
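The two "Group hosts based on ..." tasks are how Kolla-Ansible scopes the rest of the run: group_by drops every host into dynamic groups derived from the requested action and from the enable_* flags, which is why the loop item reads enable_glance_True and why the glance play then only touches testbed-node-0/1/2. A minimal sketch of that grouping step, assuming enable_glance and kolla_action are defined in the inventory and that a glance role is available on the role path:

---
- name: Group hosts based on configuration (sketch)
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on Kolla action
      ansible.builtin.group_by:
        key: "kolla_action_{{ kolla_action | default('deploy') }}"

    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_glance_{{ enable_glance | default(false) | bool }}"

- name: Apply role glance
  hosts: enable_glance_True        # only hosts where the flag evaluated to true
  gather_facts: false
  roles:
    - glance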
https://api.testbed.osism.xyz:9292 -> public) 2025-11-01 14:26:22.098618 | orchestrator | 2025-11-01 14:26:22.098627 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-11-01 14:26:22.098637 | orchestrator | Saturday 01 November 2025 14:23:12 +0000 (0:00:07.622) 0:00:13.533 ***** 2025-11-01 14:26:22.098646 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-01 14:26:22.098657 | orchestrator | 2025-11-01 14:26:22.098666 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-11-01 14:26:22.098676 | orchestrator | Saturday 01 November 2025 14:23:16 +0000 (0:00:03.978) 0:00:17.512 ***** 2025-11-01 14:26:22.098685 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 14:26:22.098695 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-11-01 14:26:22.098705 | orchestrator | 2025-11-01 14:26:22.098714 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-11-01 14:26:22.098724 | orchestrator | Saturday 01 November 2025 14:23:21 +0000 (0:00:04.329) 0:00:21.841 ***** 2025-11-01 14:26:22.098733 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 14:26:22.098742 | orchestrator | 2025-11-01 14:26:22.098752 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-11-01 14:26:22.098761 | orchestrator | Saturday 01 November 2025 14:23:24 +0000 (0:00:03.876) 0:00:25.718 ***** 2025-11-01 14:26:22.098771 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-11-01 14:26:22.098780 | orchestrator | 2025-11-01 14:26:22.098791 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-11-01 14:26:22.098802 | orchestrator | Saturday 01 November 2025 14:23:29 +0000 (0:00:04.178) 0:00:29.897 ***** 2025-11-01 14:26:22.098846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
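The service-ks-register tasks register Glance in Keystone: the image service, an internal and a public endpoint, the service project, the glance service user and its admin role grant. They run against a single host (testbed-node-0) and are idempotent, so objects that already match report ok; the no_log warning only means the user module may echo the password in verbose output. Roughly the same registration can be reproduced with the openstack CLI; the sketch below assumes an admin entry in clouds.yaml, takes the endpoint URLs from the log, and treats the region name and password variable as placeholders:

---
- name: Register Glance in Keystone (CLI sketch, not idempotent)
  hosts: localhost
  gather_facts: false
  environment:
    OS_CLOUD: admin                                   # assumed clouds.yaml entry
  tasks:
    - name: Create the image service
      ansible.builtin.command: openstack service create --name glance image

    - name: Create the internal and public endpoints
      ansible.builtin.command: >-
        openstack endpoint create --region RegionOne
        glance {{ item.interface }} {{ item.url }}
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:9292" }
        - { interface: public, url: "https://api.testbed.osism.xyz:9292" }

    - name: Create the service user and grant it admin on the service project
      ansible.builtin.command: "{{ item }}"
      loop:
        - openstack user create --project service --password '{{ glance_keystone_password }}' glance
        - openstack role add --project service --user glance admin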
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:26:22.098864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:26:22.098890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
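The very long item=... blobs in this task are not errors: Kolla-Ansible keeps one dictionary per service (container name, image, environment, volumes, health check, HAProxy frontends and backends) and loops over it with dict2items, so every iteration prints the entire service definition. A cut-down sketch of the pattern, with a heavily trimmed hypothetical glance_services dictionary and illustrative directory permissions:

---
- name: Ensure per-service config directories exist (sketch)
  hosts: glance-api
  become: true
  vars:
    glance_services:                 # trimmed version of the structure printed above
      glance-api:
        container_name: glance_api
        enabled: true
        image: registry.osism.tech/kolla/glance-api:2024.2
  tasks:
    - name: Ensuring config directories exist
      ansible.builtin.file:
        path: "/etc/kolla/{{ item.key }}"
        state: directory
        mode: "0770"
      loop: "{{ glance_services | dict2items }}"
      when: item.value.enabled | bool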
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:26:22.098902 | orchestrator | 2025-11-01 14:26:22.098914 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-11-01 14:26:22.098925 | orchestrator | Saturday 01 November 2025 14:23:33 +0000 (0:00:04.812) 0:00:34.709 ***** 2025-11-01 14:26:22.098937 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:26:22.098948 | orchestrator | 2025-11-01 14:26:22.098964 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-11-01 14:26:22.098975 | orchestrator | Saturday 01 November 2025 14:23:34 +0000 (0:00:00.734) 0:00:35.444 ***** 2025-11-01 14:26:22.098985 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:26:22.099002 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:26:22.099011 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:26:22.099021 | orchestrator | 2025-11-01 14:26:22.099030 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-11-01 14:26:22.099040 | orchestrator | Saturday 01 November 2025 14:23:39 +0000 (0:00:04.584) 0:00:40.028 ***** 2025-11-01 14:26:22.099049 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 14:26:22.099059 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 14:26:22.099068 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 14:26:22.099078 | orchestrator | 2025-11-01 14:26:22.099087 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-11-01 14:26:22.099097 | orchestrator | Saturday 01 November 2025 14:23:41 +0000 (0:00:02.220) 0:00:42.255 ***** 2025-11-01 14:26:22.099106 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 14:26:22.099116 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 14:26:22.099125 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 14:26:22.099135 | orchestrator | 2025-11-01 14:26:22.099144 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-11-01 14:26:22.099154 | orchestrator | Saturday 01 November 2025 14:23:43 +0000 (0:00:02.352) 0:00:44.609 ***** 2025-11-01 14:26:22.099163 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:26:22.099173 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:26:22.099182 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:26:22.099191 | orchestrator | 2025-11-01 14:26:22.099201 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-11-01 14:26:22.099210 | orchestrator | Saturday 01 November 2025 14:23:44 +0000 (0:00:00.982) 0:00:45.592 ***** 2025-11-01 14:26:22.099220 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:22.099229 | orchestrator | 2025-11-01 14:26:22.099239 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-11-01 14:26:22.099248 | orchestrator | Saturday 01 November 2025 14:23:45 +0000 (0:00:01.143) 
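external_ceph.yml wires Glance to the pre-deployed Ceph cluster: it creates a ceph/ subdirectory under the glance-api config directory and copies the cluster configuration and the client.glance keyring into it, one set per enabled store (here a single rbd store on cluster ceph). A minimal sketch of those two steps, with illustrative source paths on the deployment host:

---
- name: Copy external Ceph configuration for Glance (sketch)
  hosts: glance-api
  become: true
  tasks:
    - name: Ensuring glance service ceph config subdir exists
      ansible.builtin.file:
        path: /etc/kolla/glance-api/ceph
        state: directory
        mode: "0770"

    - name: Copy over ceph.conf and the Glance keyring
      ansible.builtin.copy:
        src: "/opt/configuration/ceph/{{ item }}"     # illustrative source location
        dest: "/etc/kolla/glance-api/ceph/{{ item }}"
        mode: "0660"
      loop:
        - ceph.conf
        - ceph.client.glance.keyring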
0:00:46.736 ***** 2025-11-01 14:26:22.099258 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:22.099267 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:22.099277 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:22.099286 | orchestrator | 2025-11-01 14:26:22.099296 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-11-01 14:26:22.099305 | orchestrator | Saturday 01 November 2025 14:23:46 +0000 (0:00:00.598) 0:00:47.335 ***** 2025-11-01 14:26:22.099314 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:26:22.099324 | orchestrator | 2025-11-01 14:26:22.099333 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-11-01 14:26:22.099343 | orchestrator | Saturday 01 November 2025 14:23:48 +0000 (0:00:01.556) 0:00:48.891 ***** 2025-11-01 14:26:22.099362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:26:22.099381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:26:22.099396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:26:22.099414 | orchestrator | 2025-11-01 14:26:22.099423 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-11-01 14:26:22.099433 | orchestrator | Saturday 01 November 2025 14:23:55 +0000 (0:00:07.377) 0:00:56.269 ***** 2025-11-01 14:26:22.099452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 14:26:22.099463 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:22.099500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 14:26:22.099520 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:22.099542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 14:26:22.099553 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:22.099563 | orchestrator | 2025-11-01 14:26:22.099573 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-11-01 14:26:22.099582 | orchestrator | Saturday 01 November 2025 14:23:59 +0000 (0:00:04.122) 0:01:00.391 ***** 2025-11-01 14:26:22.099593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 14:26:22.099603 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:22.099624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 
'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 14:26:22.099643 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:22.099684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 14:26:22.099695 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:22.099705 | orchestrator | 2025-11-01 14:26:22.099715 | orchestrator | TASK [glance : Creating TLS backend PEM File] 
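All of the backend-TLS tasks in this block are skipped on every node because internal (backend) TLS is not enabled for Glance in this testbed run; only the extra-CA-certificate copy changed anything. When the option is enabled, the role copies a per-service certificate and key and then builds the PEM bundle the service loads. A rough sketch of that enabled path, with assumed file names and source locations:

---
- name: Copy backend TLS material for glance-api (sketch)
  hosts: glance-api
  become: true
  vars:
    kolla_enable_tls_backend: true
  tasks:
    - name: Copying over backend internal TLS certificate and key
      ansible.builtin.copy:
        src: "/etc/kolla/certificates/{{ item }}"     # assumed source location
        dest: "/etc/kolla/glance-api/{{ item }}"
        mode: "0600"
      loop:
        - glance-cert.pem
        - glance-key.pem
      when: kolla_enable_tls_backend | bool

    - name: Creating TLS backend PEM File
      ansible.builtin.assemble:
        src: /etc/kolla/glance-api                    # concatenate the copied fragments
        regexp: 'glance-(cert|key)\.pem'
        dest: /etc/kolla/glance-api/glance.pem
        mode: "0600"
      when: kolla_enable_tls_backend | bool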
********************************** 2025-11-01 14:26:22.099724 | orchestrator | Saturday 01 November 2025 14:24:06 +0000 (0:00:06.991) 0:01:07.383 ***** 2025-11-01 14:26:22.099734 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:22.099744 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:22.099753 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:22.099763 | orchestrator | 2025-11-01 14:26:22.099773 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-11-01 14:26:22.099789 | orchestrator | Saturday 01 November 2025 14:24:12 +0000 (0:00:06.236) 0:01:13.620 ***** 2025-11-01 14:26:22.099815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:26:22.099827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
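config.json is the small bootstrap file that kolla_start reads when each container launches: it names the command to run and lists which files to copy from /var/lib/kolla/config_files into their final location before the service starts. The task above templates one per service; a minimal hand-written equivalent for glance-api, using the usual Kolla conventions for command, ownership and permissions rather than values taken from this log:

---
- name: Write a minimal config.json for glance-api (sketch)
  hosts: glance-api
  become: true
  tasks:
    - name: Copying over config.json for glance-api
      ansible.builtin.copy:
        dest: /etc/kolla/glance-api/config.json
        mode: "0660"
        content: |
          {
            "command": "glance-api",
            "config_files": [
              {
                "source": "/var/lib/kolla/config_files/glance-api.conf",
                "dest": "/etc/glance/glance-api.conf",
                "owner": "glance",
                "perm": "0600"
              }
            ]
          }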
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:26:22.099843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:26:22.099860 | orchestrator | 2025-11-01 14:26:22.099870 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-11-01 14:26:22.099880 | orchestrator | Saturday 01 November 2025 14:24:18 +0000 (0:00:05.507) 0:01:19.127 ***** 2025-11-01 14:26:22.099889 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:26:22.099899 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:26:22.099909 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:26:22.099918 | orchestrator | 2025-11-01 14:26:22.099928 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-11-01 14:26:22.099937 | orchestrator | Saturday 01 November 2025 14:24:33 +0000 (0:00:14.994) 0:01:34.121 ***** 2025-11-01 14:26:22.099947 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:22.099956 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:22.099966 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:22.099976 | orchestrator | 2025-11-01 14:26:22.099985 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-11-01 14:26:22.100000 | orchestrator | Saturday 01 November 2025 14:24:39 +0000 (0:00:05.745) 0:01:39.866 ***** 2025-11-01 14:26:22.100011 | orchestrator | 2025-11-01 14:26:22 | INFO  | Task d1a0469c-c7e1-4fe7-b054-3e203e9a6f20 is in state SUCCESS 2025-11-01 
14:26:22.100140 | orchestrator | 2025-11-01 14:26:22 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:26:22.100153 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:22.100162 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:22.100172 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:22.100181 | orchestrator | 2025-11-01 14:26:22.100191 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-11-01 14:26:22.100201 | orchestrator | Saturday 01 November 2025 14:24:43 +0000 (0:00:04.912) 0:01:44.779 ***** 2025-11-01 14:26:22.100210 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:22.100219 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:22.100229 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:22.100238 | orchestrator | 2025-11-01 14:26:22.100248 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-11-01 14:26:22.100257 | orchestrator | Saturday 01 November 2025 14:24:47 +0000 (0:00:03.832) 0:01:48.612 ***** 2025-11-01 14:26:22.100266 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:22.100276 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:22.100285 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:22.100295 | orchestrator | 2025-11-01 14:26:22.100304 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-11-01 14:26:22.100314 | orchestrator | Saturday 01 November 2025 14:24:51 +0000 (0:00:03.964) 0:01:52.577 ***** 2025-11-01 14:26:22.100323 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:22.100339 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:22.100349 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:22.100358 | orchestrator | 2025-11-01 14:26:22.100368 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-11-01 14:26:22.100377 | orchestrator | Saturday 01 November 2025 14:24:52 +0000 (0:00:00.338) 0:01:52.915 ***** 2025-11-01 14:26:22.100387 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-11-01 14:26:22.100396 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:22.100406 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-11-01 14:26:22.100416 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:22.100425 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-11-01 14:26:22.100435 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:22.100444 | orchestrator | 2025-11-01 14:26:22.100454 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-11-01 14:26:22.100463 | orchestrator | Saturday 01 November 2025 14:24:55 +0000 (0:00:03.550) 0:01:56.465 ***** 2025-11-01 14:26:22.100523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:26:22.100545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:26:22.100563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 14:26:22.100574 | orchestrator | 2025-11-01 14:26:22.100588 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-11-01 14:26:22.100598 | orchestrator | Saturday 01 November 2025 14:24:59 +0000 (0:00:03.865) 0:02:00.331 ***** 2025-11-01 14:26:22.100608 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:22.100617 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:22.100627 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:22.100636 | orchestrator | 2025-11-01 14:26:22.100646 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-11-01 14:26:22.100656 | orchestrator | Saturday 01 November 2025 14:24:59 +0000 (0:00:00.330) 0:02:00.661 ***** 2025-11-01 14:26:22.100665 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:26:22.100675 | orchestrator | 2025-11-01 14:26:22.100684 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-11-01 14:26:22.100694 | orchestrator | Saturday 01 November 2025 14:25:02 +0000 (0:00:02.296) 0:02:02.957 ***** 2025-11-01 14:26:22.100704 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:26:22.100713 | orchestrator | 2025-11-01 14:26:22.100723 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-11-01 14:26:22.100732 | orchestrator | Saturday 01 November 2025 14:25:04 +0000 (0:00:02.716) 0:02:05.674 ***** 2025-11-01 14:26:22.100742 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:26:22.100751 | orchestrator | 2025-11-01 14:26:22.100761 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-11-01 14:26:22.100772 | orchestrator | Saturday 01 November 2025 14:25:07 +0000 (0:00:02.415) 0:02:08.090 ***** 2025-11-01 14:26:22.100783 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:26:22.100799 | orchestrator | 2025-11-01 14:26:22.100810 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-11-01 14:26:22.100822 | orchestrator | Saturday 01 November 2025 14:25:42 +0000 (0:00:35.129) 
0:02:43.219 ***** 2025-11-01 14:26:22.100832 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:26:22.100843 | orchestrator | 2025-11-01 14:26:22.100858 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-11-01 14:26:22.100870 | orchestrator | Saturday 01 November 2025 14:25:44 +0000 (0:00:02.455) 0:02:45.675 ***** 2025-11-01 14:26:22.100881 | orchestrator | 2025-11-01 14:26:22.100892 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-11-01 14:26:22.100903 | orchestrator | Saturday 01 November 2025 14:25:44 +0000 (0:00:00.072) 0:02:45.747 ***** 2025-11-01 14:26:22.100913 | orchestrator | 2025-11-01 14:26:22.100924 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-11-01 14:26:22.100935 | orchestrator | Saturday 01 November 2025 14:25:45 +0000 (0:00:00.086) 0:02:45.834 ***** 2025-11-01 14:26:22.100946 | orchestrator | 2025-11-01 14:26:22.100957 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-11-01 14:26:22.100967 | orchestrator | Saturday 01 November 2025 14:25:45 +0000 (0:00:00.068) 0:02:45.902 ***** 2025-11-01 14:26:22.100978 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:26:22.100989 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:26:22.100999 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:26:22.101010 | orchestrator | 2025-11-01 14:26:22.101021 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:26:22.101032 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-11-01 14:26:22.101044 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-01 14:26:22.101055 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-01 14:26:22.101066 | orchestrator | 2025-11-01 14:26:22.101077 | orchestrator | 2025-11-01 14:26:22.101087 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:26:22.101098 | orchestrator | Saturday 01 November 2025 14:26:19 +0000 (0:00:34.401) 0:03:20.304 ***** 2025-11-01 14:26:22.101109 | orchestrator | =============================================================================== 2025-11-01 14:26:22.101120 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 35.13s 2025-11-01 14:26:22.101129 | orchestrator | glance : Restart glance-api container ---------------------------------- 34.40s 2025-11-01 14:26:22.101139 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 14.99s 2025-11-01 14:26:22.101148 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.62s 2025-11-01 14:26:22.101158 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 7.38s 2025-11-01 14:26:22.101167 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 6.99s 2025-11-01 14:26:22.101177 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 6.24s 2025-11-01 14:26:22.101186 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.75s 2025-11-01 14:26:22.101196 | orchestrator | glance : Copying over config.json files for 
services -------------------- 5.50s 2025-11-01 14:26:22.101205 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.91s 2025-11-01 14:26:22.101215 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.81s 2025-11-01 14:26:22.101224 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.58s 2025-11-01 14:26:22.101234 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.33s 2025-11-01 14:26:22.101243 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.18s 2025-11-01 14:26:22.101258 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.12s 2025-11-01 14:26:22.101268 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.98s 2025-11-01 14:26:22.101282 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.97s 2025-11-01 14:26:22.101292 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.96s 2025-11-01 14:26:22.101302 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.88s 2025-11-01 14:26:22.101311 | orchestrator | glance : Check glance containers ---------------------------------------- 3.87s 2025-11-01 14:26:22.101321 | orchestrator | 2025-11-01 14:26:22 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:26:22.102112 | orchestrator | 2025-11-01 14:26:22 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:26:22.102129 | orchestrator | 2025-11-01 14:26:22 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:26:25.146558 | orchestrator | 2025-11-01 14:26:25 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:26:25.147137 | orchestrator | 2025-11-01 14:26:25 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:26:25.148261 | orchestrator | 2025-11-01 14:26:25 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:26:25.150778 | orchestrator | 2025-11-01 14:26:25 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:26:25.150800 | orchestrator | 2025-11-01 14:26:25 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:26:28.188799 | orchestrator | 2025-11-01 14:26:28 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:26:28.189958 | orchestrator | 2025-11-01 14:26:28 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:26:28.191557 | orchestrator | 2025-11-01 14:26:28 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:26:28.192915 | orchestrator | 2025-11-01 14:26:28 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:26:28.192937 | orchestrator | 2025-11-01 14:26:28 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:26:31.234393 | orchestrator | 2025-11-01 14:26:31 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:26:31.235064 | orchestrator | 2025-11-01 14:26:31 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:26:31.236383 | orchestrator | 2025-11-01 14:26:31 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:26:31.237790 | orchestrator | 
2025-11-01 14:26:31 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:26:31.237813 | orchestrator | 2025-11-01 14:26:31 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:26:34.276366 | orchestrator | 2025-11-01 14:26:34 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:26:34.278152 | orchestrator | 2025-11-01 14:26:34 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:26:34.280755 | orchestrator | 2025-11-01 14:26:34 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:26:34.282936 | orchestrator | 2025-11-01 14:26:34 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:26:34.283019 | orchestrator | 2025-11-01 14:26:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:26:37.331227 | orchestrator | 2025-11-01 14:26:37 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:26:37.331891 | orchestrator | 2025-11-01 14:26:37 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:26:37.333123 | orchestrator | 2025-11-01 14:26:37 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:26:37.334693 | orchestrator | 2025-11-01 14:26:37 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:26:37.334711 | orchestrator | 2025-11-01 14:26:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:26:40.387085 | orchestrator | 2025-11-01 14:26:40 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:26:40.389816 | orchestrator | 2025-11-01 14:26:40 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:26:40.393188 | orchestrator | 2025-11-01 14:26:40 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:26:40.401422 | orchestrator | 2025-11-01 14:26:40 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:26:40.401455 | orchestrator | 2025-11-01 14:26:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:26:43.448031 | orchestrator | 2025-11-01 14:26:43 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:26:43.448768 | orchestrator | 2025-11-01 14:26:43 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:26:43.450374 | orchestrator | 2025-11-01 14:26:43 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:26:43.451470 | orchestrator | 2025-11-01 14:26:43 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:26:43.451541 | orchestrator | 2025-11-01 14:26:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:26:46.488763 | orchestrator | 2025-11-01 14:26:46 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:26:46.490190 | orchestrator | 2025-11-01 14:26:46 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:26:46.491700 | orchestrator | 2025-11-01 14:26:46 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:26:46.492861 | orchestrator | 2025-11-01 14:26:46 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:26:46.492882 | orchestrator | 2025-11-01 14:26:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:26:49.528950 | orchestrator | 2025-11-01 14:26:49 | INFO 
 | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state STARTED 2025-11-01 14:26:49.530257 | orchestrator | 2025-11-01 14:26:49 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:26:49.530761 | orchestrator | 2025-11-01 14:26:49 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:26:49.531381 | orchestrator | 2025-11-01 14:26:49 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:26:49.531410 | orchestrator | 2025-11-01 14:26:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:26:52.568090 | orchestrator | 2025-11-01 14:26:52 | INFO  | Task fd4f2922-15e0-4936-9f30-05f9dfc1d3cc is in state SUCCESS 2025-11-01 14:26:52.569843 | orchestrator | 2025-11-01 14:26:52.569925 | orchestrator | 2025-11-01 14:26:52.569938 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:26:52.569949 | orchestrator | 2025-11-01 14:26:52.569958 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:26:52.569967 | orchestrator | Saturday 01 November 2025 14:23:00 +0000 (0:00:00.380) 0:00:00.380 ***** 2025-11-01 14:26:52.569998 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:26:52.570009 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:26:52.570118 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:26:52.570133 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:26:52.570141 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:26:52.570150 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:26:52.570159 | orchestrator | 2025-11-01 14:26:52.570168 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:26:52.570177 | orchestrator | Saturday 01 November 2025 14:23:00 +0000 (0:00:00.901) 0:00:01.281 ***** 2025-11-01 14:26:52.570185 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-11-01 14:26:52.570194 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-11-01 14:26:52.570203 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-11-01 14:26:52.570212 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-11-01 14:26:52.570220 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-11-01 14:26:52.570229 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-11-01 14:26:52.570238 | orchestrator | 2025-11-01 14:26:52.570246 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-11-01 14:26:52.570255 | orchestrator | 2025-11-01 14:26:52.570264 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-01 14:26:52.570272 | orchestrator | Saturday 01 November 2025 14:23:01 +0000 (0:00:00.757) 0:00:02.039 ***** 2025-11-01 14:26:52.570281 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:26:52.570292 | orchestrator | 2025-11-01 14:26:52.570300 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-11-01 14:26:52.570309 | orchestrator | Saturday 01 November 2025 14:23:02 +0000 (0:00:01.248) 0:00:03.287 ***** 2025-11-01 14:26:52.570318 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-11-01 14:26:52.570326 | orchestrator | 2025-11-01 
14:26:52.570421 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-11-01 14:26:52.570433 | orchestrator | Saturday 01 November 2025 14:23:06 +0000 (0:00:03.776) 0:00:07.063 ***** 2025-11-01 14:26:52.570444 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-11-01 14:26:52.570454 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-11-01 14:26:52.570463 | orchestrator | 2025-11-01 14:26:52.570518 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-11-01 14:26:52.570530 | orchestrator | Saturday 01 November 2025 14:23:14 +0000 (0:00:07.996) 0:00:15.060 ***** 2025-11-01 14:26:52.570540 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-01 14:26:52.570550 | orchestrator | 2025-11-01 14:26:52.570572 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-11-01 14:26:52.570582 | orchestrator | Saturday 01 November 2025 14:23:18 +0000 (0:00:04.059) 0:00:19.119 ***** 2025-11-01 14:26:52.570591 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 14:26:52.570601 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-11-01 14:26:52.570610 | orchestrator | 2025-11-01 14:26:52.570620 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-11-01 14:26:52.570629 | orchestrator | Saturday 01 November 2025 14:23:23 +0000 (0:00:04.283) 0:00:23.403 ***** 2025-11-01 14:26:52.570639 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 14:26:52.570648 | orchestrator | 2025-11-01 14:26:52.570657 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-11-01 14:26:52.570667 | orchestrator | Saturday 01 November 2025 14:23:26 +0000 (0:00:03.733) 0:00:27.137 ***** 2025-11-01 14:26:52.570677 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-11-01 14:26:52.570686 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-11-01 14:26:52.570703 | orchestrator | 2025-11-01 14:26:52.570713 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-11-01 14:26:52.570722 | orchestrator | Saturday 01 November 2025 14:23:35 +0000 (0:00:08.877) 0:00:36.015 ***** 2025-11-01 14:26:52.570735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.570767 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.570779 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.570793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.570804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.570820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.570836 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.570846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.570855 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.570869 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.570884 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.570894 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.570903 | orchestrator | 2025-11-01 14:26:52.570916 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-01 14:26:52.570925 | orchestrator | Saturday 01 November 2025 14:23:38 +0000 (0:00:02.845) 0:00:38.860 ***** 2025-11-01 14:26:52.570934 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:52.570942 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:52.570951 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:52.570960 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:26:52.570968 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:26:52.570977 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:26:52.570985 | orchestrator | 2025-11-01 14:26:52.570994 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-01 14:26:52.571002 | orchestrator | Saturday 01 November 2025 14:23:39 +0000 (0:00:00.662) 0:00:39.523 ***** 2025-11-01 14:26:52.571011 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:52.571019 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:52.571028 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:52.571036 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:26:52.571045 | orchestrator | 2025-11-01 14:26:52.571054 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-11-01 14:26:52.571062 | orchestrator | Saturday 01 November 2025 14:23:40 +0000 (0:00:01.566) 0:00:41.089 ***** 2025-11-01 14:26:52.571071 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-11-01 14:26:52.571079 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-11-01 14:26:52.571088 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-11-01 14:26:52.571096 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-11-01 14:26:52.571105 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-11-01 14:26:52.571114 | orchestrator | 
changed: [testbed-node-4] => (item=cinder-backup) 2025-11-01 14:26:52.571122 | orchestrator | 2025-11-01 14:26:52.571131 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-11-01 14:26:52.571151 | orchestrator | Saturday 01 November 2025 14:23:45 +0000 (0:00:04.465) 0:00:45.554 ***** 2025-11-01 14:26:52.571165 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-01 14:26:52.571182 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-01 14:26:52.571192 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-01 14:26:52.571208 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-01 14:26:52.571217 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-01 14:26:52.571226 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-01 14:26:52.571245 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-01 14:26:52.571255 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-01 14:26:52.571269 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-01 14:26:52.571279 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-01 14:26:52.571298 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-01 14:26:52.571308 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-01 14:26:52.571317 | orchestrator | 2025-11-01 14:26:52.571326 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] 
***************** 2025-11-01 14:26:52.571334 | orchestrator | Saturday 01 November 2025 14:23:51 +0000 (0:00:06.200) 0:00:51.755 ***** 2025-11-01 14:26:52.571343 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 14:26:52.571362 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 14:26:52.571371 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 14:26:52.571380 | orchestrator | 2025-11-01 14:26:52.571388 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-11-01 14:26:52.571397 | orchestrator | Saturday 01 November 2025 14:23:54 +0000 (0:00:02.981) 0:00:54.737 ***** 2025-11-01 14:26:52.571406 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-11-01 14:26:52.571414 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-11-01 14:26:52.571423 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-11-01 14:26:52.571431 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-11-01 14:26:52.571440 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-11-01 14:26:52.571533 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-11-01 14:26:52.571544 | orchestrator | 2025-11-01 14:26:52.571553 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-11-01 14:26:52.571561 | orchestrator | Saturday 01 November 2025 14:23:58 +0000 (0:00:03.773) 0:00:58.510 ***** 2025-11-01 14:26:52.571570 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-11-01 14:26:52.571579 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-11-01 14:26:52.571587 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-11-01 14:26:52.571596 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-11-01 14:26:52.571605 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-11-01 14:26:52.571623 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-11-01 14:26:52.571632 | orchestrator | 2025-11-01 14:26:52.571641 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-11-01 14:26:52.571656 | orchestrator | Saturday 01 November 2025 14:23:59 +0000 (0:00:01.213) 0:00:59.724 ***** 2025-11-01 14:26:52.571665 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:52.571674 | orchestrator | 2025-11-01 14:26:52.571683 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-11-01 14:26:52.571691 | orchestrator | Saturday 01 November 2025 14:23:59 +0000 (0:00:00.169) 0:00:59.893 ***** 2025-11-01 14:26:52.571700 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:52.571708 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:52.571717 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:52.571725 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:26:52.571734 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:26:52.571743 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:26:52.571751 | orchestrator | 2025-11-01 14:26:52.571760 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-01 
14:26:52.571768 | orchestrator | Saturday 01 November 2025 14:24:00 +0000 (0:00:00.998) 0:01:00.891 ***** 2025-11-01 14:26:52.571778 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:26:52.571788 | orchestrator | 2025-11-01 14:26:52.571797 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-11-01 14:26:52.571805 | orchestrator | Saturday 01 November 2025 14:24:02 +0000 (0:00:01.908) 0:01:02.800 ***** 2025-11-01 14:26:52.571824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.571834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.571850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.571865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.571874 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.571883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.571896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.571906 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.572343 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.572370 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.572380 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.572394 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.572404 | orchestrator | 2025-11-01 14:26:52.572413 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-11-01 14:26:52.572422 | orchestrator | Saturday 01 November 2025 14:24:07 +0000 (0:00:04.862) 0:01:07.663 ***** 2025-11-01 14:26:52.572431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 14:26:52.572446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 14:26:52.572526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572538 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:52.572547 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:52.572561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 14:26:52.572570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572579 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:52.572588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572621 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:26:52.572630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572648 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:26:52.572661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572686 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:26:52.572694 | orchestrator | 2025-11-01 14:26:52.572703 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-11-01 14:26:52.572712 | orchestrator | Saturday 01 November 2025 14:24:09 +0000 (0:00:02.553) 0:01:10.216 ***** 2025-11-01 14:26:52.572725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 14:26:52.572735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 14:26:52.572757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572766 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:52.572775 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:52.572784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 14:26:52.572804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572814 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:52.572823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572841 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:26:52.572854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572877 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:26:52.572891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.572910 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:26:52.572919 | orchestrator | 2025-11-01 14:26:52.572927 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-11-01 14:26:52.572936 | orchestrator | Saturday 01 November 2025 14:24:12 +0000 (0:00:02.517) 0:01:12.734 ***** 2025-11-01 14:26:52.572945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.572960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.572979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.572995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573017 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573071 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573089 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573098 | orchestrator | 2025-11-01 14:26:52.573107 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-11-01 14:26:52.573116 | orchestrator | Saturday 01 November 2025 14:24:15 +0000 (0:00:03.490) 0:01:16.225 ***** 2025-11-01 14:26:52.573134 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-11-01 14:26:52.573143 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:26:52.573152 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-11-01 14:26:52.573161 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:26:52.573170 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-11-01 14:26:52.573178 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-11-01 14:26:52.573187 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-11-01 14:26:52.573195 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:26:52.573204 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-11-01 14:26:52.573213 | orchestrator | 2025-11-01 14:26:52.573222 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-11-01 14:26:52.573230 | orchestrator | Saturday 01 November 2025 14:24:19 +0000 (0:00:03.330) 0:01:19.555 ***** 2025-11-01 14:26:52.573239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.573253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.573263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.573276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573290 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573303 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573354 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573370 | orchestrator | 2025-11-01 14:26:52.573378 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-11-01 14:26:52.573386 | orchestrator | Saturday 01 November 2025 14:24:33 +0000 (0:00:14.667) 0:01:34.223 ***** 2025-11-01 14:26:52.573398 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:52.573406 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:52.573414 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:52.573422 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:26:52.573429 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:26:52.573437 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:26:52.573445 | orchestrator | 2025-11-01 14:26:52.573452 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-11-01 14:26:52.573460 | orchestrator | Saturday 01 November 2025 14:24:36 +0000 (0:00:02.787) 0:01:37.010 ***** 2025-11-01 14:26:52.573468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 14:26:52.573495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.573508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 14:26:52.573517 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:52.573525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.573533 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:52.573545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 14:26:52.573554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.573562 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:52.573576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.573588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.573596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.573605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.573613 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:26:52.573621 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:26:52.573633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.573646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 14:26:52.573654 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:26:52.573662 | orchestrator | 2025-11-01 14:26:52.573670 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-11-01 14:26:52.573678 | orchestrator | Saturday 01 November 2025 14:24:38 +0000 (0:00:02.080) 0:01:39.091 ***** 2025-11-01 14:26:52.573686 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:52.573693 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:52.573701 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:52.573709 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:26:52.573716 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:26:52.573724 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:26:52.573732 | orchestrator | 2025-11-01 14:26:52.573740 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-11-01 14:26:52.573748 | orchestrator | Saturday 01 November 2025 14:24:39 +0000 (0:00:00.689) 0:01:39.780 ***** 2025-11-01 14:26:52.573759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.573768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.573780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 14:26:52.573796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573805 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573885 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573905 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 14:26:52.573914 | orchestrator | 2025-11-01 14:26:52.573922 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-01 14:26:52.573930 | orchestrator | Saturday 01 November 2025 14:24:43 +0000 (0:00:03.852) 0:01:43.633 ***** 2025-11-01 14:26:52.573937 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:52.573945 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:26:52.573953 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:26:52.573961 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:26:52.573968 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:26:52.573976 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:26:52.573983 | orchestrator | 2025-11-01 14:26:52.573991 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-11-01 14:26:52.573998 | orchestrator | Saturday 01 November 2025 14:24:43 +0000 (0:00:00.710) 0:01:44.343 ***** 2025-11-01 14:26:52.574006 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:26:52.574014 | orchestrator | 2025-11-01 14:26:52.574076 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-11-01 14:26:52.574085 | orchestrator | Saturday 01 November 2025 14:24:46 +0000 (0:00:02.635) 0:01:46.979 ***** 2025-11-01 14:26:52.574093 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:26:52.574100 | orchestrator | 2025-11-01 14:26:52.574108 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-11-01 14:26:52.574116 | orchestrator | Saturday 01 November 2025 14:24:49 +0000 (0:00:02.770) 0:01:49.749 ***** 2025-11-01 14:26:52.574124 | orchestrator | changed: [testbed-node-0] 2025-11-01 
14:26:52.574132 | orchestrator | 2025-11-01 14:26:52.574139 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-01 14:26:52.574147 | orchestrator | Saturday 01 November 2025 14:25:10 +0000 (0:00:21.479) 0:02:11.228 ***** 2025-11-01 14:26:52.574155 | orchestrator | 2025-11-01 14:26:52.574168 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-01 14:26:52.574176 | orchestrator | Saturday 01 November 2025 14:25:10 +0000 (0:00:00.096) 0:02:11.324 ***** 2025-11-01 14:26:52.574184 | orchestrator | 2025-11-01 14:26:52.574192 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-01 14:26:52.574199 | orchestrator | Saturday 01 November 2025 14:25:11 +0000 (0:00:00.073) 0:02:11.398 ***** 2025-11-01 14:26:52.574207 | orchestrator | 2025-11-01 14:26:52.574215 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-01 14:26:52.574223 | orchestrator | Saturday 01 November 2025 14:25:11 +0000 (0:00:00.076) 0:02:11.475 ***** 2025-11-01 14:26:52.574230 | orchestrator | 2025-11-01 14:26:52.574238 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-01 14:26:52.574246 | orchestrator | Saturday 01 November 2025 14:25:11 +0000 (0:00:00.091) 0:02:11.566 ***** 2025-11-01 14:26:52.574254 | orchestrator | 2025-11-01 14:26:52.574261 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-01 14:26:52.574269 | orchestrator | Saturday 01 November 2025 14:25:11 +0000 (0:00:00.070) 0:02:11.637 ***** 2025-11-01 14:26:52.574277 | orchestrator | 2025-11-01 14:26:52.574285 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-11-01 14:26:52.574292 | orchestrator | Saturday 01 November 2025 14:25:11 +0000 (0:00:00.072) 0:02:11.709 ***** 2025-11-01 14:26:52.574300 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:26:52.574308 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:26:52.574315 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:26:52.574323 | orchestrator | 2025-11-01 14:26:52.574331 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-11-01 14:26:52.574339 | orchestrator | Saturday 01 November 2025 14:25:41 +0000 (0:00:30.373) 0:02:42.082 ***** 2025-11-01 14:26:52.574346 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:26:52.574354 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:26:52.574362 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:26:52.574369 | orchestrator | 2025-11-01 14:26:52.574377 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-11-01 14:26:52.574385 | orchestrator | Saturday 01 November 2025 14:25:52 +0000 (0:00:10.757) 0:02:52.840 ***** 2025-11-01 14:26:52.574393 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:26:52.574400 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:26:52.574408 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:26:52.574416 | orchestrator | 2025-11-01 14:26:52.574423 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-11-01 14:26:52.574431 | orchestrator | Saturday 01 November 2025 14:26:38 +0000 (0:00:46.287) 0:03:39.128 ***** 2025-11-01 14:26:52.574439 | orchestrator | changed: [testbed-node-4] 
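Note on the healthchecks in the container definitions above: each cinder service declares a Docker healthcheck of the form "CMD-SHELL healthcheck_port <service> 5672", whose name and arguments suggest a check that the service process has a TCP connection on port 5672 (RabbitMQ). The helper script itself is not part of this log; the following is only a minimal Python sketch of that idea, assuming the third-party psutil package is available, and is not the actual kolla script:

    import sys

    import psutil  # third-party; assumed available for this sketch


    def healthcheck_port(process_name: str, port: int) -> bool:
        """Return True if a process matching process_name has a TCP socket using port."""
        # Collect PIDs of processes whose name or command line mentions the service.
        pids = set()
        for proc in psutil.process_iter(["pid", "name", "cmdline"]):
            try:
                name = proc.info["name"] or ""
                cmdline = " ".join(proc.info["cmdline"] or [])
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
            if process_name in name or process_name in cmdline:
                pids.add(proc.info["pid"])
        # Healthy if any of those processes has a TCP connection involving the port,
        # e.g. an established AMQP connection to RabbitMQ on 5672.
        # (Seeing other processes' connections generally needs elevated privileges.)
        for conn in psutil.net_connections(kind="tcp"):
            if conn.pid not in pids:
                continue
            if (conn.laddr and conn.laddr.port == port) or (conn.raddr and conn.raddr.port == port):
                return True
        return False


    if __name__ == "__main__":
        # Exit 0 (healthy) or 1 (unhealthy), as a CMD-SHELL healthcheck expects.
        sys.exit(0 if healthcheck_port(sys.argv[1], int(sys.argv[2])) else 1)

Docker turns that exit code into the container health state (0 healthy, non-zero unhealthy) and applies the interval, timeout, retries and start_period values listed in the definitions above; failures during the start period are not counted against the retry limit.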
2025-11-01 14:26:52.574447 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:26:52.574454 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:26:52.574462 | orchestrator | 2025-11-01 14:26:52.574470 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-11-01 14:26:52.574520 | orchestrator | Saturday 01 November 2025 14:26:49 +0000 (0:00:10.922) 0:03:50.050 ***** 2025-11-01 14:26:52.574538 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:26:52.574546 | orchestrator | 2025-11-01 14:26:52.574554 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:26:52.574566 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-01 14:26:52.574576 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-11-01 14:26:52.574584 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-11-01 14:26:52.574592 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-01 14:26:52.574600 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-01 14:26:52.574607 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-01 14:26:52.574615 | orchestrator | 2025-11-01 14:26:52.574623 | orchestrator | 2025-11-01 14:26:52.574631 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:26:52.574639 | orchestrator | Saturday 01 November 2025 14:26:50 +0000 (0:00:00.729) 0:03:50.780 ***** 2025-11-01 14:26:52.574647 | orchestrator | =============================================================================== 2025-11-01 14:26:52.574654 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 46.29s 2025-11-01 14:26:52.574662 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 30.37s 2025-11-01 14:26:52.574670 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.48s 2025-11-01 14:26:52.574678 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 14.67s 2025-11-01 14:26:52.574686 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.92s 2025-11-01 14:26:52.574693 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.76s 2025-11-01 14:26:52.574701 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.88s 2025-11-01 14:26:52.574709 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 8.00s 2025-11-01 14:26:52.574722 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.20s 2025-11-01 14:26:52.574730 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.86s 2025-11-01 14:26:52.574738 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 4.47s 2025-11-01 14:26:52.574746 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.28s 2025-11-01 14:26:52.574753 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 4.06s 2025-11-01 
14:26:52.574761 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.85s 2025-11-01 14:26:52.574769 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.78s 2025-11-01 14:26:52.574777 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.77s 2025-11-01 14:26:52.574784 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.73s 2025-11-01 14:26:52.574792 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.49s 2025-11-01 14:26:52.574800 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.33s 2025-11-01 14:26:52.574808 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.98s 2025-11-01 14:26:52.574815 | orchestrator | 2025-11-01 14:26:52 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:26:52.576816 | orchestrator | 2025-11-01 14:26:52 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:26:52.578620 | orchestrator | 2025-11-01 14:26:52 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:26:52.580125 | orchestrator | 2025-11-01 14:26:52 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:26:52.580383 | orchestrator | 2025-11-01 14:26:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:26:55.620034 | orchestrator | 2025-11-01 14:26:55 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:26:55.621000 | orchestrator | 2025-11-01 14:26:55 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:26:55.623687 | orchestrator | 2025-11-01 14:26:55 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:26:55.625123 | orchestrator | 2025-11-01 14:26:55 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:26:55.625145 | orchestrator | 2025-11-01 14:26:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:26:58.684329 | orchestrator | 2025-11-01 14:26:58 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:26:58.686586 | orchestrator | 2025-11-01 14:26:58 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:26:58.689578 | orchestrator | 2025-11-01 14:26:58 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:26:58.691824 | orchestrator | 2025-11-01 14:26:58 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:26:58.692001 | orchestrator | 2025-11-01 14:26:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:01.727174 | orchestrator | 2025-11-01 14:27:01 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:01.727277 | orchestrator | 2025-11-01 14:27:01 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:01.728041 | orchestrator | 2025-11-01 14:27:01 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:01.728760 | orchestrator | 2025-11-01 14:27:01 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:01.728779 | orchestrator | 2025-11-01 14:27:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:04.775335 | orchestrator | 2025-11-01 14:27:04 | INFO  | 
Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:04.776739 | orchestrator | 2025-11-01 14:27:04 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:04.780332 | orchestrator | 2025-11-01 14:27:04 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:04.782207 | orchestrator | 2025-11-01 14:27:04 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:04.782658 | orchestrator | 2025-11-01 14:27:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:07.820849 | orchestrator | 2025-11-01 14:27:07 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:07.827063 | orchestrator | 2025-11-01 14:27:07 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:07.828861 | orchestrator | 2025-11-01 14:27:07 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:07.830756 | orchestrator | 2025-11-01 14:27:07 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:07.831145 | orchestrator | 2025-11-01 14:27:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:10.864313 | orchestrator | 2025-11-01 14:27:10 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:10.865554 | orchestrator | 2025-11-01 14:27:10 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:10.867991 | orchestrator | 2025-11-01 14:27:10 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:10.869870 | orchestrator | 2025-11-01 14:27:10 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:10.869898 | orchestrator | 2025-11-01 14:27:10 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:13.913078 | orchestrator | 2025-11-01 14:27:13 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:13.918650 | orchestrator | 2025-11-01 14:27:13 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:13.920931 | orchestrator | 2025-11-01 14:27:13 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:13.924154 | orchestrator | 2025-11-01 14:27:13 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:13.924174 | orchestrator | 2025-11-01 14:27:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:16.974371 | orchestrator | 2025-11-01 14:27:16 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:16.976652 | orchestrator | 2025-11-01 14:27:16 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:16.979996 | orchestrator | 2025-11-01 14:27:16 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:16.982881 | orchestrator | 2025-11-01 14:27:16 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:16.983161 | orchestrator | 2025-11-01 14:27:16 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:20.019754 | orchestrator | 2025-11-01 14:27:20 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:20.021419 | orchestrator | 2025-11-01 14:27:20 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:20.023170 | orchestrator | 2025-11-01 14:27:20 | INFO  | 
Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:20.025130 | orchestrator | 2025-11-01 14:27:20 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:20.025151 | orchestrator | 2025-11-01 14:27:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:23.083031 | orchestrator | 2025-11-01 14:27:23 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:23.083930 | orchestrator | 2025-11-01 14:27:23 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:23.086731 | orchestrator | 2025-11-01 14:27:23 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:23.088583 | orchestrator | 2025-11-01 14:27:23 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:23.088604 | orchestrator | 2025-11-01 14:27:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:26.127299 | orchestrator | 2025-11-01 14:27:26 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:26.129122 | orchestrator | 2025-11-01 14:27:26 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:26.130208 | orchestrator | 2025-11-01 14:27:26 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:26.131839 | orchestrator | 2025-11-01 14:27:26 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:26.131863 | orchestrator | 2025-11-01 14:27:26 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:29.176845 | orchestrator | 2025-11-01 14:27:29 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:29.180036 | orchestrator | 2025-11-01 14:27:29 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:29.183160 | orchestrator | 2025-11-01 14:27:29 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:29.186104 | orchestrator | 2025-11-01 14:27:29 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:29.186129 | orchestrator | 2025-11-01 14:27:29 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:32.231972 | orchestrator | 2025-11-01 14:27:32 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:32.234867 | orchestrator | 2025-11-01 14:27:32 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:32.237393 | orchestrator | 2025-11-01 14:27:32 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:32.238846 | orchestrator | 2025-11-01 14:27:32 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:32.239091 | orchestrator | 2025-11-01 14:27:32 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:35.291878 | orchestrator | 2025-11-01 14:27:35 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:35.291960 | orchestrator | 2025-11-01 14:27:35 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:35.293346 | orchestrator | 2025-11-01 14:27:35 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:35.295632 | orchestrator | 2025-11-01 14:27:35 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:35.295649 | orchestrator | 2025-11-01 14:27:35 | INFO  | 
Wait 1 second(s) until the next check 2025-11-01 14:27:38.339704 | orchestrator | 2025-11-01 14:27:38 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:38.340714 | orchestrator | 2025-11-01 14:27:38 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:38.341832 | orchestrator | 2025-11-01 14:27:38 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:38.343118 | orchestrator | 2025-11-01 14:27:38 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:38.343151 | orchestrator | 2025-11-01 14:27:38 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:41.386720 | orchestrator | 2025-11-01 14:27:41 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:41.388227 | orchestrator | 2025-11-01 14:27:41 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:41.390341 | orchestrator | 2025-11-01 14:27:41 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:41.392272 | orchestrator | 2025-11-01 14:27:41 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:41.392294 | orchestrator | 2025-11-01 14:27:41 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:44.434683 | orchestrator | 2025-11-01 14:27:44 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:44.435925 | orchestrator | 2025-11-01 14:27:44 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:44.438438 | orchestrator | 2025-11-01 14:27:44 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:44.440152 | orchestrator | 2025-11-01 14:27:44 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:44.440175 | orchestrator | 2025-11-01 14:27:44 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:47.480819 | orchestrator | 2025-11-01 14:27:47 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:47.482110 | orchestrator | 2025-11-01 14:27:47 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:47.483516 | orchestrator | 2025-11-01 14:27:47 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:47.484722 | orchestrator | 2025-11-01 14:27:47 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:47.484744 | orchestrator | 2025-11-01 14:27:47 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:50.525701 | orchestrator | 2025-11-01 14:27:50 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:50.526989 | orchestrator | 2025-11-01 14:27:50 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:50.528758 | orchestrator | 2025-11-01 14:27:50 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:50.531024 | orchestrator | 2025-11-01 14:27:50 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:50.531054 | orchestrator | 2025-11-01 14:27:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:53.575115 | orchestrator | 2025-11-01 14:27:53 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:53.575743 | orchestrator | 2025-11-01 14:27:53 | INFO  | Task 
37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:53.576599 | orchestrator | 2025-11-01 14:27:53 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:53.577318 | orchestrator | 2025-11-01 14:27:53 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:53.577339 | orchestrator | 2025-11-01 14:27:53 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:56.627097 | orchestrator | 2025-11-01 14:27:56 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:56.628279 | orchestrator | 2025-11-01 14:27:56 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:56.630348 | orchestrator | 2025-11-01 14:27:56 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:56.634290 | orchestrator | 2025-11-01 14:27:56 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:56.634325 | orchestrator | 2025-11-01 14:27:56 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:27:59.684983 | orchestrator | 2025-11-01 14:27:59 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:27:59.686995 | orchestrator | 2025-11-01 14:27:59 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:27:59.689082 | orchestrator | 2025-11-01 14:27:59 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:27:59.690894 | orchestrator | 2025-11-01 14:27:59 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:27:59.690921 | orchestrator | 2025-11-01 14:27:59 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:02.733846 | orchestrator | 2025-11-01 14:28:02 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:02.735055 | orchestrator | 2025-11-01 14:28:02 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:02.736415 | orchestrator | 2025-11-01 14:28:02 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:02.738192 | orchestrator | 2025-11-01 14:28:02 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:02.738219 | orchestrator | 2025-11-01 14:28:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:05.783704 | orchestrator | 2025-11-01 14:28:05 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:05.785321 | orchestrator | 2025-11-01 14:28:05 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:05.787685 | orchestrator | 2025-11-01 14:28:05 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:05.790115 | orchestrator | 2025-11-01 14:28:05 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:05.790176 | orchestrator | 2025-11-01 14:28:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:08.832789 | orchestrator | 2025-11-01 14:28:08 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:08.834688 | orchestrator | 2025-11-01 14:28:08 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:08.836692 | orchestrator | 2025-11-01 14:28:08 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:08.838534 | orchestrator | 2025-11-01 14:28:08 | INFO  | Task 
090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:08.838558 | orchestrator | 2025-11-01 14:28:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:11.887517 | orchestrator | 2025-11-01 14:28:11 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:11.889350 | orchestrator | 2025-11-01 14:28:11 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:11.891365 | orchestrator | 2025-11-01 14:28:11 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:11.893031 | orchestrator | 2025-11-01 14:28:11 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:11.893053 | orchestrator | 2025-11-01 14:28:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:14.940688 | orchestrator | 2025-11-01 14:28:14 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:14.942275 | orchestrator | 2025-11-01 14:28:14 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:14.944234 | orchestrator | 2025-11-01 14:28:14 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:14.946758 | orchestrator | 2025-11-01 14:28:14 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:14.946781 | orchestrator | 2025-11-01 14:28:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:17.990175 | orchestrator | 2025-11-01 14:28:17 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:17.991699 | orchestrator | 2025-11-01 14:28:17 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:17.995458 | orchestrator | 2025-11-01 14:28:17 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:17.998443 | orchestrator | 2025-11-01 14:28:17 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:17.998610 | orchestrator | 2025-11-01 14:28:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:21.046899 | orchestrator | 2025-11-01 14:28:21 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:21.051030 | orchestrator | 2025-11-01 14:28:21 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:21.054100 | orchestrator | 2025-11-01 14:28:21 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:21.056978 | orchestrator | 2025-11-01 14:28:21 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:21.057001 | orchestrator | 2025-11-01 14:28:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:24.097947 | orchestrator | 2025-11-01 14:28:24 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:24.099290 | orchestrator | 2025-11-01 14:28:24 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:24.101112 | orchestrator | 2025-11-01 14:28:24 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:24.103150 | orchestrator | 2025-11-01 14:28:24 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:24.103181 | orchestrator | 2025-11-01 14:28:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:27.141159 | orchestrator | 2025-11-01 14:28:27 | INFO  | Task 
cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:27.142293 | orchestrator | 2025-11-01 14:28:27 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:27.144141 | orchestrator | 2025-11-01 14:28:27 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:27.146239 | orchestrator | 2025-11-01 14:28:27 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:27.146266 | orchestrator | 2025-11-01 14:28:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:30.211377 | orchestrator | 2025-11-01 14:28:30 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:30.212125 | orchestrator | 2025-11-01 14:28:30 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:30.213263 | orchestrator | 2025-11-01 14:28:30 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:30.215004 | orchestrator | 2025-11-01 14:28:30 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:30.215033 | orchestrator | 2025-11-01 14:28:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:33.262515 | orchestrator | 2025-11-01 14:28:33 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:33.264212 | orchestrator | 2025-11-01 14:28:33 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:33.266126 | orchestrator | 2025-11-01 14:28:33 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:33.268014 | orchestrator | 2025-11-01 14:28:33 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:33.268078 | orchestrator | 2025-11-01 14:28:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:36.307805 | orchestrator | 2025-11-01 14:28:36 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:36.310545 | orchestrator | 2025-11-01 14:28:36 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:36.311202 | orchestrator | 2025-11-01 14:28:36 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:36.312806 | orchestrator | 2025-11-01 14:28:36 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:36.312931 | orchestrator | 2025-11-01 14:28:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:39.344683 | orchestrator | 2025-11-01 14:28:39 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:39.345573 | orchestrator | 2025-11-01 14:28:39 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:39.346351 | orchestrator | 2025-11-01 14:28:39 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:39.355283 | orchestrator | 2025-11-01 14:28:39 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:39.357040 | orchestrator | 2025-11-01 14:28:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:42.406722 | orchestrator | 2025-11-01 14:28:42 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:42.410651 | orchestrator | 2025-11-01 14:28:42 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:42.413694 | orchestrator | 2025-11-01 14:28:42 | INFO  | Task 
0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:42.416452 | orchestrator | 2025-11-01 14:28:42 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:42.416882 | orchestrator | 2025-11-01 14:28:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:45.456674 | orchestrator | 2025-11-01 14:28:45 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:45.459086 | orchestrator | 2025-11-01 14:28:45 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:45.461510 | orchestrator | 2025-11-01 14:28:45 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:45.463368 | orchestrator | 2025-11-01 14:28:45 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:45.463967 | orchestrator | 2025-11-01 14:28:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:48.516933 | orchestrator | 2025-11-01 14:28:48 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:48.519021 | orchestrator | 2025-11-01 14:28:48 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:48.521261 | orchestrator | 2025-11-01 14:28:48 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:48.522975 | orchestrator | 2025-11-01 14:28:48 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:48.522986 | orchestrator | 2025-11-01 14:28:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:51.567761 | orchestrator | 2025-11-01 14:28:51 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:51.568711 | orchestrator | 2025-11-01 14:28:51 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:51.570729 | orchestrator | 2025-11-01 14:28:51 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:51.571681 | orchestrator | 2025-11-01 14:28:51 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:51.571698 | orchestrator | 2025-11-01 14:28:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:54.613960 | orchestrator | 2025-11-01 14:28:54 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:54.615273 | orchestrator | 2025-11-01 14:28:54 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:54.616812 | orchestrator | 2025-11-01 14:28:54 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:54.618368 | orchestrator | 2025-11-01 14:28:54 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:54.618400 | orchestrator | 2025-11-01 14:28:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:28:57.665130 | orchestrator | 2025-11-01 14:28:57 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:28:57.666186 | orchestrator | 2025-11-01 14:28:57 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:28:57.668750 | orchestrator | 2025-11-01 14:28:57 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:28:57.670288 | orchestrator | 2025-11-01 14:28:57 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:28:57.670316 | orchestrator | 2025-11-01 14:28:57 | INFO  | Wait 1 
second(s) until the next check 2025-11-01 14:29:00.722395 | orchestrator | 2025-11-01 14:29:00 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:29:00.723714 | orchestrator | 2025-11-01 14:29:00 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:29:00.726832 | orchestrator | 2025-11-01 14:29:00 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:00.729665 | orchestrator | 2025-11-01 14:29:00 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:00.729689 | orchestrator | 2025-11-01 14:29:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:03.781440 | orchestrator | 2025-11-01 14:29:03 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:29:03.783358 | orchestrator | 2025-11-01 14:29:03 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:29:03.785666 | orchestrator | 2025-11-01 14:29:03 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:03.787813 | orchestrator | 2025-11-01 14:29:03 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:03.787859 | orchestrator | 2025-11-01 14:29:03 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:06.834227 | orchestrator | 2025-11-01 14:29:06 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:29:06.836229 | orchestrator | 2025-11-01 14:29:06 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:29:06.838348 | orchestrator | 2025-11-01 14:29:06 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:06.840153 | orchestrator | 2025-11-01 14:29:06 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:06.840174 | orchestrator | 2025-11-01 14:29:06 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:09.888176 | orchestrator | 2025-11-01 14:29:09 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:29:09.891668 | orchestrator | 2025-11-01 14:29:09 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:29:09.894616 | orchestrator | 2025-11-01 14:29:09 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:09.895872 | orchestrator | 2025-11-01 14:29:09 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:09.895963 | orchestrator | 2025-11-01 14:29:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:12.949023 | orchestrator | 2025-11-01 14:29:12 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:29:12.951909 | orchestrator | 2025-11-01 14:29:12 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:29:12.954980 | orchestrator | 2025-11-01 14:29:12 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:12.957046 | orchestrator | 2025-11-01 14:29:12 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:12.957067 | orchestrator | 2025-11-01 14:29:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:16.004809 | orchestrator | 2025-11-01 14:29:16 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:29:16.007133 | orchestrator | 2025-11-01 14:29:16 | INFO  | Task 
37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state STARTED 2025-11-01 14:29:16.011583 | orchestrator | 2025-11-01 14:29:16 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:16.015004 | orchestrator | 2025-11-01 14:29:16 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:16.015407 | orchestrator | 2025-11-01 14:29:16 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:19.066783 | orchestrator | 2025-11-01 14:29:19 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state STARTED 2025-11-01 14:29:19.067663 | orchestrator | 2025-11-01 14:29:19 | INFO  | Task 37fdf65e-81e4-43cb-a766-b1f5068b4b94 is in state SUCCESS 2025-11-01 14:29:19.068876 | orchestrator | 2025-11-01 14:29:19 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:19.070327 | orchestrator | 2025-11-01 14:29:19 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:19.070789 | orchestrator | 2025-11-01 14:29:19 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:22.119733 | orchestrator | 2025-11-01 14:29:22 | INFO  | Task cb9dce4c-1595-45cd-91f5-60aea77b7d14 is in state SUCCESS 2025-11-01 14:29:22.119904 | orchestrator | 2025-11-01 14:29:22.119923 | orchestrator | 2025-11-01 14:29:22.119936 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:29:22.119947 | orchestrator | 2025-11-01 14:29:22.119958 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:29:22.119970 | orchestrator | Saturday 01 November 2025 14:26:24 +0000 (0:00:00.198) 0:00:00.198 ***** 2025-11-01 14:29:22.119980 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:29:22.119992 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:29:22.120003 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:29:22.120014 | orchestrator | 2025-11-01 14:29:22.120024 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:29:22.120035 | orchestrator | Saturday 01 November 2025 14:26:24 +0000 (0:00:00.304) 0:00:00.502 ***** 2025-11-01 14:29:22.120046 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-11-01 14:29:22.120057 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-11-01 14:29:22.120067 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-11-01 14:29:22.120078 | orchestrator | 2025-11-01 14:29:22.120088 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-11-01 14:29:22.120099 | orchestrator | 2025-11-01 14:29:22.120109 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-11-01 14:29:22.120120 | orchestrator | Saturday 01 November 2025 14:26:25 +0000 (0:00:00.823) 0:00:01.326 ***** 2025-11-01 14:29:22.120155 | orchestrator | 2025-11-01 14:29:22.120166 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-11-01 14:29:22.120177 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:29:22.120187 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:29:22.120198 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:29:22.120209 | orchestrator | 2025-11-01 14:29:22.120220 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:29:22.120231 | orchestrator | testbed-node-0 : ok=3  changed=0 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:29:22.120243 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:29:22.120254 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:29:22.120265 | orchestrator | 2025-11-01 14:29:22.120275 | orchestrator | 2025-11-01 14:29:22.120298 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:29:22.120309 | orchestrator | Saturday 01 November 2025 14:29:17 +0000 (0:02:51.856) 0:02:53.183 ***** 2025-11-01 14:29:22.120320 | orchestrator | =============================================================================== 2025-11-01 14:29:22.120330 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 171.86s 2025-11-01 14:29:22.120341 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2025-11-01 14:29:22.120351 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-11-01 14:29:22.120362 | orchestrator | 2025-11-01 14:29:22.122266 | orchestrator | 2025-11-01 14:29:22.122296 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:29:22.122341 | orchestrator | 2025-11-01 14:29:22.122354 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:29:22.122365 | orchestrator | Saturday 01 November 2025 14:26:55 +0000 (0:00:00.402) 0:00:00.402 ***** 2025-11-01 14:29:22.122444 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:29:22.122457 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:29:22.122498 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:29:22.122509 | orchestrator | 2025-11-01 14:29:22.122520 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:29:22.122531 | orchestrator | Saturday 01 November 2025 14:26:56 +0000 (0:00:00.343) 0:00:00.745 ***** 2025-11-01 14:29:22.122541 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-11-01 14:29:22.122552 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-11-01 14:29:22.122563 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-11-01 14:29:22.122573 | orchestrator | 2025-11-01 14:29:22.122612 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-11-01 14:29:22.122623 | orchestrator | 2025-11-01 14:29:22.122647 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-11-01 14:29:22.122659 | orchestrator | Saturday 01 November 2025 14:26:56 +0000 (0:00:00.455) 0:00:01.200 ***** 2025-11-01 14:29:22.122671 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:29:22.122682 | orchestrator | 2025-11-01 14:29:22.122693 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-11-01 14:29:22.122703 | orchestrator | Saturday 01 November 2025 14:26:57 +0000 (0:00:00.730) 0:00:01.931 ***** 2025-11-01 14:29:22.122718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.122748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.122788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.122800 | orchestrator | 2025-11-01 14:29:22.122811 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-11-01 14:29:22.122822 | orchestrator | Saturday 01 November 2025 14:26:58 +0000 (0:00:00.795) 0:00:02.727 ***** 2025-11-01 14:29:22.122833 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-11-01 14:29:22.122845 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-11-01 14:29:22.122856 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 14:29:22.122866 | orchestrator | 2025-11-01 14:29:22.122878 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-11-01 14:29:22.122898 | orchestrator | Saturday 01 November 2025 14:26:59 +0000 (0:00:00.937) 0:00:03.665 ***** 2025-11-01 14:29:22.122911 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:29:22.122923 | orchestrator | 2025-11-01 14:29:22.122935 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-11-01 14:29:22.122947 | orchestrator | Saturday 01 November 2025 14:26:59 +0000 (0:00:00.860) 0:00:04.526 ***** 2025-11-01 14:29:22.122972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.122986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.123029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.123042 | orchestrator | 2025-11-01 14:29:22.123054 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-11-01 14:29:22.123066 | orchestrator | Saturday 01 November 2025 14:27:01 +0000 (0:00:01.560) 0:00:06.086 ***** 2025-11-01 14:29:22.123079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-01 14:29:22.123092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-01 14:29:22.123105 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:29:22.123117 | orchestrator | 
skipping: [testbed-node-1] 2025-11-01 14:29:22.123143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-01 14:29:22.123156 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:29:22.123168 | orchestrator | 2025-11-01 14:29:22.123180 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-11-01 14:29:22.123192 | orchestrator | Saturday 01 November 2025 14:27:01 +0000 (0:00:00.406) 0:00:06.493 ***** 2025-11-01 14:29:22.123205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-01 14:29:22.123225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-01 14:29:22.123237 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:29:22.123248 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:29:22.123259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-01 14:29:22.123270 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:29:22.123281 | orchestrator | 2025-11-01 14:29:22.123292 | orchestrator | TASK [grafana : Copying over config.json files] 
******************************** 2025-11-01 14:29:22.123302 | orchestrator | Saturday 01 November 2025 14:27:02 +0000 (0:00:00.867) 0:00:07.360 ***** 2025-11-01 14:29:22.123313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.123330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.123349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.123367 | orchestrator | 2025-11-01 14:29:22.123378 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-11-01 14:29:22.123388 | orchestrator | Saturday 01 November 2025 14:27:04 +0000 (0:00:01.303) 0:00:08.663 ***** 2025-11-01 14:29:22.123400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.123411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.123423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.123434 | orchestrator | 2025-11-01 14:29:22.123445 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-11-01 14:29:22.123456 | orchestrator | Saturday 01 November 2025 14:27:05 +0000 (0:00:01.449) 0:00:10.113 ***** 2025-11-01 14:29:22.123467 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:29:22.123497 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:29:22.123508 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:29:22.123519 | orchestrator | 2025-11-01 14:29:22.123529 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-11-01 14:29:22.123540 | orchestrator | Saturday 01 November 2025 14:27:06 +0000 (0:00:00.538) 0:00:10.652 ***** 2025-11-01 14:29:22.123551 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-11-01 14:29:22.123561 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-11-01 14:29:22.123572 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-11-01 14:29:22.123582 | orchestrator | 2025-11-01 14:29:22.123593 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-11-01 14:29:22.123603 | orchestrator | Saturday 01 November 2025 14:27:07 +0000 (0:00:01.402) 0:00:12.054 ***** 2025-11-01 14:29:22.123619 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-11-01 14:29:22.123630 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-11-01 14:29:22.123641 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-11-01 14:29:22.123659 | orchestrator | 2025-11-01 14:29:22.123670 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-11-01 14:29:22.123681 | orchestrator | Saturday 01 November 2025 14:27:08 +0000 (0:00:01.272) 0:00:13.327 ***** 2025-11-01 14:29:22.123697 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 14:29:22.123708 | orchestrator | 2025-11-01 14:29:22.123719 | orchestrator | TASK [grafana : Find templated grafana dashboards] 
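The prometheus.yaml.j2 template rendered in the "Configuring Prometheus as data source for Grafana" task produces a standard Grafana data source provisioning file. A minimal sketch of what such a file typically looks like follows; the URL, credentials, and exact field set here are illustrative assumptions, not the template's actual output:

    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        # Placeholder endpoint; in a kolla-ansible deployment this would
        # point at the internal Prometheus address behind HAProxy.
        url: "http://prometheus.internal.example:9090"
        isDefault: true
        basicAuth: true
        basicAuthUser: admin
        secureJsonData:
          # Placeholder variable name for the templated secret.
          basicAuthPassword: "{{ prometheus_password }}"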
***************************** 2025-11-01 14:29:22.123730 | orchestrator | Saturday 01 November 2025 14:27:09 +0000 (0:00:00.812) 0:00:14.139 ***** 2025-11-01 14:29:22.123740 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-11-01 14:29:22.123751 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-11-01 14:29:22.123761 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:29:22.123772 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:29:22.123783 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:29:22.123794 | orchestrator | 2025-11-01 14:29:22.123804 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-11-01 14:29:22.123815 | orchestrator | Saturday 01 November 2025 14:27:10 +0000 (0:00:00.747) 0:00:14.887 ***** 2025-11-01 14:29:22.123826 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:29:22.123836 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:29:22.123847 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:29:22.123857 | orchestrator | 2025-11-01 14:29:22.123868 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-11-01 14:29:22.123879 | orchestrator | Saturday 01 November 2025 14:27:10 +0000 (0:00:00.640) 0:00:15.528 ***** 2025-11-01 14:29:22.123891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1080856, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3088691, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.123904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1080856, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3088691, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.123915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1080856, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3088691, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.123927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1080905, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3257003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.123955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1080905, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3257003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.123967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1080905, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3257003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.123978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1080862, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3127198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.123989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1080862, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3127198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 25279, 'inode': 1080862, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3127198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1080909, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.32772, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1080909, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.32772, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1080909, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.32772, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1080878, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3192496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1080878, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3192496, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1080878, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3192496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1080897, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3233793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1080897, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3233793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1080897, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3233793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1080854, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3081903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-11-01 14:29:22.124171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1080854, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3081903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1080854, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3081903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1080860, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3107197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1080860, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3107197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1080860, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3107197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1080863, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3127198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1080863, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3127198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1080863, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3127198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1080886, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.321057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1080886, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.321057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1080886, 'dev': 105, 'nlink': 1, 
'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.321057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1080903, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3249147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1080903, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3249147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1080903, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3249147, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1080861, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.31172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1080861, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.31172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1080861, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.31172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1080894, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3226948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1080894, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3226948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1080894, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3226948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1080882, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3205242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124848 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1080882, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3205242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1080882, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3205242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1080874, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3186467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1080874, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3186467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1080874, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3186467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1080870, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3162656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1080870, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3162656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1080870, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3162656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1080888, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3219712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1080888, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3219712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.124999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1080888, 'dev': 105, 'nlink': 1, 
'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3219712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1080864, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3152497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1080864, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3152497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1080864, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3152497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1080899, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3244467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1080899, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3244467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1080899, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3244467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1081001, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3730774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1081001, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3730774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1081001, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3730774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1080938, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3405142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125210 | 
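The provisioning.yaml copied earlier from /opt/configuration/environments/kolla/files/overlays/grafana/ is a Grafana dashboard provider definition, and the dashboard JSON files being copied in this loop are the content that provider is expected to load. A minimal sketch of such a provider file, with the provider name and container-side path as assumptions rather than the overlay's actual contents:

    apiVersion: 1
    providers:
      - name: default
        orgId: 1
        folder: ""
        type: file
        disableDeletion: false
        updateIntervalSeconds: 60
        options:
          # Placeholder path; the dashboards copied above are made
          # available to the container and read from a directory like this.
          path: /etc/grafana/dashboards
          foldersFromFilesStructure: true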
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1080938, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3405142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1080938, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3405142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1080921, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3339899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1080921, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3339899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1080921, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3339899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1080965, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3449595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1080965, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3449595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1080965, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3449595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1080913, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.33072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1080913, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.33072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1080913, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.33072, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1080982, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3557203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1080982, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3557203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1080982, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3557203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1080968, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3537202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1080968, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3537202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1080968, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3537202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1080985, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3577204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1080985, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3577204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1080985, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3577204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1080998, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3707204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1080998, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3707204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1080998, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3707204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1080981, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3547204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1080981, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3547204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1080981, 'dev': 
105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3547204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1080954, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3421192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1080954, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3421192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1080954, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3421192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1080932, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.337497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1080932, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.337497, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1080932, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.337497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.125991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1080951, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3405142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1080951, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3405142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1080951, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3405142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1080923, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.336738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1080923, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.336738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1080923, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.336738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1080963, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3442042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1080963, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3442042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1080963, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3442042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-11-01 14:29:22.126166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1080992, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3667204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1080992, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3667204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1080992, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3667204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1080989, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3637204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1080989, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3637204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126232 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1080989, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3637204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1080915, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.33172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1080915, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.33172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1080915, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.33172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1080919, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3332505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1080919, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3332505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1080919, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3332505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1080978, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3547204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1080978, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3547204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1080978, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3547204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1080987, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3615735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1080987, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3615735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1080987, 'dev': 105, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1762003892.3615735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 14:29:22.126403 | orchestrator | 2025-11-01 14:29:22.126414 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-11-01 14:29:22.126426 | orchestrator | Saturday 01 November 2025 14:27:49 +0000 (0:00:38.163) 0:00:53.691 ***** 2025-11-01 14:29:22.126437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.126448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.126460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 14:29:22.126490 | orchestrator | 2025-11-01 14:29:22.126502 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-11-01 14:29:22.126513 | orchestrator | Saturday 01 November 2025 14:27:50 +0000 (0:00:01.072) 0:00:54.764 ***** 2025-11-01 14:29:22.126523 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:29:22.126540 | orchestrator | 2025-11-01 14:29:22.126555 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-11-01 14:29:22.126566 | orchestrator | Saturday 01 November 2025 14:27:52 +0000 (0:00:02.598) 0:00:57.362 ***** 2025-11-01 14:29:22.126576 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:29:22.126587 | orchestrator | 2025-11-01 14:29:22.126598 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-11-01 14:29:22.126609 | orchestrator | Saturday 01 November 2025 14:27:55 +0000 (0:00:02.528) 0:00:59.891 ***** 2025-11-01 14:29:22.126619 | orchestrator | 2025-11-01 14:29:22.126630 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-11-01 14:29:22.126646 | orchestrator | Saturday 01 November 2025 14:27:55 +0000 (0:00:00.085) 0:00:59.976 ***** 2025-11-01 14:29:22.126674 | orchestrator | 2025-11-01 14:29:22.126685 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-11-01 14:29:22.126696 | orchestrator | Saturday 01 November 2025 14:27:55 +0000 (0:00:00.085) 0:01:00.062 ***** 2025-11-01 14:29:22.126706 | orchestrator | 2025-11-01 14:29:22.126716 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-11-01 14:29:22.126725 | orchestrator | Saturday 01 November 2025 14:27:55 +0000 (0:00:00.262) 0:01:00.324 ***** 2025-11-01 14:29:22.126734 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:29:22.126744 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:29:22.126753 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:29:22.126763 | orchestrator | 2025-11-01 14:29:22.126772 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-11-01 14:29:22.126782 | orchestrator | Saturday 01 November 2025 14:27:57 +0000 (0:00:01.927) 0:01:02.251 ***** 2025-11-01 14:29:22.126791 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:29:22.126800 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:29:22.126810 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-11-01 14:29:22.126820 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2025-11-01 14:29:22.126829 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-11-01 14:29:22.126839 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:29:22.126848 | orchestrator | 2025-11-01 14:29:22.126858 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-11-01 14:29:22.126867 | orchestrator | Saturday 01 November 2025 14:28:37 +0000 (0:00:39.480) 0:01:41.732 ***** 2025-11-01 14:29:22.126877 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:29:22.126886 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:29:22.126896 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:29:22.126905 | orchestrator | 2025-11-01 14:29:22.126914 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-11-01 14:29:22.126924 | orchestrator | Saturday 01 November 2025 14:29:13 +0000 (0:00:36.757) 0:02:18.489 ***** 2025-11-01 14:29:22.126933 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:29:22.126943 | orchestrator | 2025-11-01 14:29:22.126952 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-11-01 14:29:22.126961 | orchestrator | Saturday 01 November 2025 14:29:16 +0000 (0:00:02.409) 0:02:20.898 ***** 2025-11-01 14:29:22.126971 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:29:22.126980 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:29:22.126990 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:29:22.126999 | orchestrator | 2025-11-01 14:29:22.127008 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-11-01 14:29:22.127018 | orchestrator | Saturday 01 November 2025 14:29:16 +0000 (0:00:00.625) 0:02:21.524 ***** 2025-11-01 14:29:22.127029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-11-01 14:29:22.127048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-11-01 14:29:22.127058 | orchestrator | 2025-11-01 14:29:22.127068 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-11-01 14:29:22.127077 | orchestrator | Saturday 01 November 2025 14:29:19 +0000 (0:00:02.590) 0:02:24.115 ***** 2025-11-01 14:29:22.127086 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:29:22.127096 | orchestrator | 2025-11-01 14:29:22.127105 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:29:22.127115 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-01 14:29:22.127125 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-01 14:29:22.127134 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-01 14:29:22.127144 | orchestrator | 2025-11-01 
14:29:22.127153 | orchestrator | 2025-11-01 14:29:22.127163 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:29:22.127172 | orchestrator | Saturday 01 November 2025 14:29:19 +0000 (0:00:00.300) 0:02:24.416 ***** 2025-11-01 14:29:22.127182 | orchestrator | =============================================================================== 2025-11-01 14:29:22.127196 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.48s 2025-11-01 14:29:22.127206 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.16s 2025-11-01 14:29:22.127215 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 36.76s 2025-11-01 14:29:22.127224 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.60s 2025-11-01 14:29:22.127234 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.59s 2025-11-01 14:29:22.127248 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.53s 2025-11-01 14:29:22.127258 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.41s 2025-11-01 14:29:22.127267 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.93s 2025-11-01 14:29:22.127276 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.56s 2025-11-01 14:29:22.127286 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.45s 2025-11-01 14:29:22.127295 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.40s 2025-11-01 14:29:22.127305 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.30s 2025-11-01 14:29:22.127314 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.27s 2025-11-01 14:29:22.127323 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.07s 2025-11-01 14:29:22.127333 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.94s 2025-11-01 14:29:22.127342 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.87s 2025-11-01 14:29:22.127351 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.86s 2025-11-01 14:29:22.127361 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.81s 2025-11-01 14:29:22.127370 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.80s 2025-11-01 14:29:22.127379 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.75s 2025-11-01 14:29:22.127419 | orchestrator | 2025-11-01 14:29:22 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:29:22.127567 | orchestrator | 2025-11-01 14:29:22 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:22.128132 | orchestrator | 2025-11-01 14:29:22 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:22.128510 | orchestrator | 2025-11-01 14:29:22 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:25.176405 | orchestrator | 2025-11-01 14:29:25 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 
14:29:25.178562 | orchestrator | 2025-11-01 14:29:25 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:25.180894 | orchestrator | 2025-11-01 14:29:25 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:25.180914 | orchestrator | 2025-11-01 14:29:25 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:28.217104 | orchestrator | 2025-11-01 14:29:28 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:29:28.218320 | orchestrator | 2025-11-01 14:29:28 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:28.222669 | orchestrator | 2025-11-01 14:29:28 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:28.222694 | orchestrator | 2025-11-01 14:29:28 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:31.274838 | orchestrator | 2025-11-01 14:29:31 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:29:31.277187 | orchestrator | 2025-11-01 14:29:31 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:31.279860 | orchestrator | 2025-11-01 14:29:31 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:31.279886 | orchestrator | 2025-11-01 14:29:31 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:34.322345 | orchestrator | 2025-11-01 14:29:34 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:29:34.323060 | orchestrator | 2025-11-01 14:29:34 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:34.324787 | orchestrator | 2025-11-01 14:29:34 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:34.324873 | orchestrator | 2025-11-01 14:29:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:37.373627 | orchestrator | 2025-11-01 14:29:37 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:29:37.375653 | orchestrator | 2025-11-01 14:29:37 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:37.381201 | orchestrator | 2025-11-01 14:29:37 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:37.383195 | orchestrator | 2025-11-01 14:29:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:40.426849 | orchestrator | 2025-11-01 14:29:40 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:29:40.427882 | orchestrator | 2025-11-01 14:29:40 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:40.429319 | orchestrator | 2025-11-01 14:29:40 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:40.429342 | orchestrator | 2025-11-01 14:29:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:43.480868 | orchestrator | 2025-11-01 14:29:43 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:29:43.484222 | orchestrator | 2025-11-01 14:29:43 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:43.487552 | orchestrator | 2025-11-01 14:29:43 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:43.487580 | orchestrator | 2025-11-01 14:29:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:46.533945 | orchestrator | 2025-11-01 
14:29:46 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:29:46.536209 | orchestrator | 2025-11-01 14:29:46 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:46.538094 | orchestrator | 2025-11-01 14:29:46 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:46.538453 | orchestrator | 2025-11-01 14:29:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:49.581241 | orchestrator | 2025-11-01 14:29:49 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:29:49.582204 | orchestrator | 2025-11-01 14:29:49 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:49.583856 | orchestrator | 2025-11-01 14:29:49 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:49.583988 | orchestrator | 2025-11-01 14:29:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:52.642756 | orchestrator | 2025-11-01 14:29:52 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:29:52.647633 | orchestrator | 2025-11-01 14:29:52 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:52.650512 | orchestrator | 2025-11-01 14:29:52 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:52.650536 | orchestrator | 2025-11-01 14:29:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:55.697884 | orchestrator | 2025-11-01 14:29:55 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:29:55.697985 | orchestrator | 2025-11-01 14:29:55 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:55.700655 | orchestrator | 2025-11-01 14:29:55 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:55.702133 | orchestrator | 2025-11-01 14:29:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:29:58.748023 | orchestrator | 2025-11-01 14:29:58 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:29:58.752361 | orchestrator | 2025-11-01 14:29:58 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:29:58.754939 | orchestrator | 2025-11-01 14:29:58 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:29:58.754971 | orchestrator | 2025-11-01 14:29:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:30:01.802883 | orchestrator | 2025-11-01 14:30:01 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:30:01.802984 | orchestrator | 2025-11-01 14:30:01 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:30:01.804950 | orchestrator | 2025-11-01 14:30:01 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:30:01.804975 | orchestrator | 2025-11-01 14:30:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:30:04.854646 | orchestrator | 2025-11-01 14:30:04 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:30:04.854817 | orchestrator | 2025-11-01 14:30:04 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:30:04.855684 | orchestrator | 2025-11-01 14:30:04 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:30:04.855758 | orchestrator | 2025-11-01 14:30:04 | INFO  | Wait 1 
second(s) until the next check 2025-11-01 14:30:07.893860 | orchestrator | 2025-11-01 14:30:07 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:30:07.894599 | orchestrator | 2025-11-01 14:30:07 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:30:07.895549 | orchestrator | 2025-11-01 14:30:07 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:30:07.895613 | orchestrator | 2025-11-01 14:30:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:30:10.943526 | orchestrator | 2025-11-01 14:30:10 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:30:10.945680 | orchestrator | 2025-11-01 14:30:10 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:30:10.947894 | orchestrator | 2025-11-01 14:30:10 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:30:10.947919 | orchestrator | 2025-11-01 14:30:10 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:30:13.991074 | orchestrator | 2025-11-01 14:30:13 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:30:13.991651 | orchestrator | 2025-11-01 14:30:13 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:30:13.994010 | orchestrator | 2025-11-01 14:30:13 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:30:13.994156 | orchestrator | 2025-11-01 14:30:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:30:17.036287 | orchestrator | 2025-11-01 14:30:17 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:30:17.038636 | orchestrator | 2025-11-01 14:30:17 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:30:17.041092 | orchestrator | 2025-11-01 14:30:17 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:30:17.041117 | orchestrator | 2025-11-01 14:30:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:30:20.115135 | orchestrator | 2025-11-01 14:30:20 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:30:20.118998 | orchestrator | 2025-11-01 14:30:20 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:30:20.119082 | orchestrator | 2025-11-01 14:30:20 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:30:20.119098 | orchestrator | 2025-11-01 14:30:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:30:23.162093 | orchestrator | 2025-11-01 14:30:23 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:30:23.163038 | orchestrator | 2025-11-01 14:30:23 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:30:23.164646 | orchestrator | 2025-11-01 14:30:23 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:30:23.164668 | orchestrator | 2025-11-01 14:30:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:30:26.203032 | orchestrator | 2025-11-01 14:30:26 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:30:26.203773 | orchestrator | 2025-11-01 14:30:26 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state STARTED 2025-11-01 14:30:26.204518 | orchestrator | 2025-11-01 14:30:26 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state 
STARTED
2025-11-01 14:30:26.204716 | orchestrator | 2025-11-01 14:30:26 | INFO  | Wait 1 second(s) until the next check
[The same status check repeats roughly every 3 seconds from 14:30:29 through 14:34:02: tasks 4706d79e-9586-47b3-bb7d-ffe9079d2493, 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 and 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 are each reported "is in state STARTED", followed each round by "Wait 1 second(s) until the next check".]
2025-11-01 14:34:05.911305 | orchestrator | 2025-11-01 14:34:05 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED
2025-11-01 14:34:05.917628 | orchestrator | 2025-11-01 14:34:05 | INFO  | Task 0bc021cb-15ac-40f0-a0bd-c0335b9c2812 is in state SUCCESS
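
Note: the repeated "Task <id> is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a client that polls the state of the queued deployment tasks until they reach a terminal state. A minimal sketch of that polling pattern in Python (illustrative only; get_task_state() is a hypothetical callback, not the actual osism client API):

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Poll task states until every task reaches SUCCESS or FAILURE."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # e.g. "STARTED", "SUCCESS", "FAILURE"
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)
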
2025-11-01 14:34:05.920470 | orchestrator |
2025-11-01 14:34:05.920506 | orchestrator |
2025-11-01 14:34:05.920519 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 14:34:05.920530 | orchestrator |
2025-11-01 14:34:05.920541 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-11-01 14:34:05.920606 | orchestrator | Saturday 01 November 2025 14:24:45 +0000 (0:00:00.287) 0:00:00.287 *****
2025-11-01 14:34:05.920619 | orchestrator | changed: [testbed-manager]
2025-11-01 14:34:05.920631 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:34:05.920642 | orchestrator | changed: [testbed-node-1]
2025-11-01 14:34:05.920653 | orchestrator | changed: [testbed-node-2]
2025-11-01 14:34:05.920663 | orchestrator | changed: [testbed-node-3]
2025-11-01 14:34:05.920674 | orchestrator | changed: [testbed-node-4]
2025-11-01 14:34:05.920685 | orchestrator | changed: [testbed-node-5]
2025-11-01 14:34:05.920696 | orchestrator |
2025-11-01 14:34:05.920706 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 14:34:05.920718 | orchestrator | Saturday 01 November 2025 14:24:46 +0000 (0:00:00.862) 0:00:01.150 *****
2025-11-01 14:34:05.920728 | orchestrator | changed: [testbed-manager]
2025-11-01 14:34:05.920739 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:34:05.920750 | orchestrator | changed: [testbed-node-1]
2025-11-01 14:34:05.920760 | orchestrator | changed: [testbed-node-2]
2025-11-01 14:34:05.920786 | orchestrator | changed: [testbed-node-3]
2025-11-01 14:34:05.920797 | orchestrator | changed: [testbed-node-4]
2025-11-01 14:34:05.920808 | orchestrator | changed: [testbed-node-5]
2025-11-01 14:34:05.920819 | orchestrator |
2025-11-01 14:34:05.920830 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-01 14:34:05.920841 | orchestrator | Saturday 01 November 2025 14:24:47 +0000 (0:00:01.075) 0:00:02.226 *****
2025-11-01 14:34:05.920851 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-11-01 14:34:05.920862 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-11-01 14:34:05.920873 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-11-01 14:34:05.920884 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-11-01 14:34:05.920894 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-11-01 14:34:05.920931 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-11-01 14:34:05.921104 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-11-01 14:34:05.921117 | orchestrator |
2025-11-01 14:34:05.921130 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-11-01 14:34:05.921142 | orchestrator |
2025-11-01 14:34:05.921154 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-11-01 14:34:05.921167 | orchestrator | Saturday 01 November 2025 14:24:49 +0000 (0:00:01.427) 0:00:03.653 *****
2025-11-01 14:34:05.921179 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 14:34:05.921191 | orchestrator |
2025-11-01 14:34:05.921203 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-11-01 14:34:05.921215 | orchestrator | Saturday 01 November 2025 14:24:50 +0000 (0:00:00.992) 0:00:04.645 *****
2025-11-01 14:34:05.921229 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-11-01 14:34:05.921255 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-11-01 14:34:05.921269 | orchestrator |
2025-11-01 14:34:05.921281 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-11-01 14:34:05.921293 | orchestrator | Saturday 01 November 2025 14:24:54 +0000 (0:00:04.968) 0:00:09.614 *****
2025-11-01 14:34:05.921306 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-11-01 14:34:05.921318 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-11-01 14:34:05.921330 | orchestrator | changed: [testbed-node-0]
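
Note: the two tasks above create the nova_api and nova_cell0 databases and a database user with privileges on them (kolla-ansible does this through its own database modules). A rough hand-rolled equivalent using PyMySQL, for orientation only; the host and credentials below are placeholders, not values from this deployment:

    import pymysql

    # Placeholder connection details; the real values come from the generated kolla secrets.
    conn = pymysql.connect(host="192.168.16.10", user="root", password="secret", autocommit=True)
    with conn.cursor() as cur:
        for db in ("nova_api", "nova_cell0"):
            cur.execute(f"CREATE DATABASE IF NOT EXISTS {db} DEFAULT CHARACTER SET utf8mb4")
        cur.execute("CREATE USER IF NOT EXISTS 'nova'@'%' IDENTIFIED BY %s", ("nova-db-password",))
        for db in ("nova_api", "nova_cell0"):
            cur.execute(f"GRANT ALL PRIVILEGES ON {db}.* TO 'nova'@'%'")
    conn.close()
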
2025-11-01 14:34:05.921356 | orchestrator |
2025-11-01 14:34:05.921369 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-11-01 14:34:05.921381 | orchestrator | Saturday 01 November 2025 14:24:59 +0000 (0:00:04.716) 0:00:14.330 *****
2025-11-01 14:34:05.921394 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:34:05.921406 | orchestrator |
2025-11-01 14:34:05.921418 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-11-01 14:34:05.921429 | orchestrator | Saturday 01 November 2025 14:25:00 +0000 (0:00:00.812) 0:00:15.143 *****
2025-11-01 14:34:05.921439 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:34:05.921501 | orchestrator |
2025-11-01 14:34:05.921512 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-11-01 14:34:05.921523 | orchestrator | Saturday 01 November 2025 14:25:01 +0000 (0:00:01.452) 0:00:16.596 *****
2025-11-01 14:34:05.921533 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:34:05.921544 | orchestrator |
2025-11-01 14:34:05.921555 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-11-01 14:34:05.921566 | orchestrator | Saturday 01 November 2025 14:25:04 +0000 (0:00:02.725) 0:00:19.321 *****
2025-11-01 14:34:05.921576 | orchestrator | skipping: [testbed-node-0]
2025-11-01 14:34:05.921587 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.921598 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.921608 | orchestrator |
2025-11-01 14:34:05.921619 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-11-01 14:34:05.921630 | orchestrator | Saturday 01 November 2025 14:25:05 +0000 (0:00:00.351) 0:00:19.672 *****
2025-11-01 14:34:05.921640 | orchestrator | ok: [testbed-node-0]
2025-11-01 14:34:05.921651 | orchestrator |
2025-11-01 14:34:05.921662 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-11-01 14:34:05.921673 | orchestrator | Saturday 01 November 2025 14:25:40 +0000 (0:00:35.599) 0:00:55.272 *****
2025-11-01 14:34:05.921683 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:34:05.921695 | orchestrator |
2025-11-01 14:34:05.921705 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-11-01 14:34:05.921752 | orchestrator | Saturday 01 November 2025 14:25:57 +0000 (0:00:16.978) 0:01:12.250 *****
2025-11-01 14:34:05.921764 | orchestrator | ok: [testbed-node-0]
2025-11-01 14:34:05.921775 | orchestrator |
2025-11-01 14:34:05.921786 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-11-01 14:34:05.921797 | orchestrator | Saturday 01 November 2025 14:26:12 +0000 (0:00:15.375) 0:01:27.625 *****
2025-11-01 14:34:05.921821 | orchestrator | ok: [testbed-node-0]
2025-11-01 14:34:05.921833 | orchestrator |
2025-11-01 14:34:05.921844 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-11-01 14:34:05.921854 | orchestrator | Saturday 01 November 2025 14:26:14 +0000 (0:00:01.227) 0:01:28.853 *****
2025-11-01 14:34:05.921865 | orchestrator | skipping: [testbed-node-0]
2025-11-01 14:34:05.921876 | orchestrator |
2025-11-01 14:34:05.921886 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-11-01 14:34:05.921897 | orchestrator | Saturday 01 November 2025 14:26:14 +0000 (0:00:00.488) 0:01:29.341 *****
2025-11-01 14:34:05.921908 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 14:34:05.921919 | orchestrator |
2025-11-01 14:34:05.921930 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-11-01 14:34:05.921940 | orchestrator | Saturday 01 November 2025 14:26:15 +0000 (0:00:00.528) 0:01:29.870 *****
2025-11-01 14:34:05.921951 | orchestrator | ok: [testbed-node-0]
2025-11-01 14:34:05.921962 | orchestrator |
2025-11-01 14:34:05.921972 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-11-01 14:34:05.921990 | orchestrator | Saturday 01 November 2025 14:26:35 +0000 (0:00:20.355) 0:01:50.225 *****
2025-11-01 14:34:05.922001 | orchestrator | skipping: [testbed-node-0]
2025-11-01 14:34:05.922012 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.922086 | orchestrator | skipping: [testbed-node-2]
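
Note: "Running Nova API bootstrap container", "Create cell0 mappings" and "Get a list of existing cells" above wrap nova-manage invocations executed in a short-lived bootstrap container. A rough sketch of the underlying commands (illustrative only; the container name and the exact arguments kolla-ansible passes may differ):

    import subprocess

    def nova_manage(*args):
        """Run a nova-manage command inside a nova-api container on the target node."""
        cmd = ["docker", "exec", "nova_api", "nova-manage", *args]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    nova_manage("api_db", "sync")                 # create the nova_api schema
    nova_manage("cell_v2", "map_cell0")           # register cell0 ("Create cell0 mappings")
    print(nova_manage("cell_v2", "list_cells"))   # what "Get a list of existing cells" inspects
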
2025-11-01 14:34:05.922098 | orchestrator |
2025-11-01 14:34:05.922108 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-11-01 14:34:05.922119 | orchestrator |
2025-11-01 14:34:05.922130 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-11-01 14:34:05.922140 | orchestrator | Saturday 01 November 2025 14:26:35 +0000 (0:00:00.339) 0:01:50.564 *****
2025-11-01 14:34:05.922151 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 14:34:05.922162 | orchestrator |
2025-11-01 14:34:05.922173 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-11-01 14:34:05.922183 | orchestrator | Saturday 01 November 2025 14:26:36 +0000 (0:00:00.618) 0:01:51.182 *****
2025-11-01 14:34:05.922194 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.922205 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.922215 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:34:05.922226 | orchestrator |
2025-11-01 14:34:05.922237 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-11-01 14:34:05.922248 | orchestrator | Saturday 01 November 2025 14:26:38 +0000 (0:00:02.363) 0:01:53.546 *****
2025-11-01 14:34:05.922259 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.922270 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.922280 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:34:05.922291 | orchestrator |
2025-11-01 14:34:05.922302 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-11-01 14:34:05.922312 | orchestrator | Saturday 01 November 2025 14:26:41 +0000 (0:00:02.788) 0:01:56.335 *****
2025-11-01 14:34:05.922323 | orchestrator | skipping: [testbed-node-0]
2025-11-01 14:34:05.922334 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.922345 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.922355 | orchestrator |
2025-11-01 14:34:05.922366 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-11-01 14:34:05.922377 | orchestrator | Saturday 01 November 2025 14:26:42 +0000 (0:00:00.396) 0:01:56.731 *****
2025-11-01 14:34:05.922387 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-11-01 14:34:05.922398 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.922409 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-11-01 14:34:05.922419 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.922430 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-11-01 14:34:05.922441 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-11-01 14:34:05.922469 | orchestrator |
2025-11-01 14:34:05.922480 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-11-01 14:34:05.922490 | orchestrator | Saturday 01 November 2025 14:26:51 +0000 (0:00:09.521) 0:02:06.253 *****
2025-11-01 14:34:05.922501 | orchestrator | skipping: [testbed-node-0]
2025-11-01 14:34:05.922512 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.922523 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.922534 | orchestrator |
2025-11-01 14:34:05.922544 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-11-01 14:34:05.922555 | orchestrator | Saturday 01 November 2025 14:26:51 +0000 (0:00:00.338) 0:02:06.592 *****
2025-11-01 14:34:05.922566 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-11-01 14:34:05.922576 | orchestrator | skipping: [testbed-node-0]
2025-11-01 14:34:05.922587 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-11-01 14:34:05.922598 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.922608 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-11-01 14:34:05.922619 | orchestrator | skipping: [testbed-node-2]
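
Note: the service-rabbitmq tasks above ensure a "nova" RabbitMQ user (and, where enabled, a vhost with permissions) exists; the vhost tasks are skipped in this run. Kolla-ansible drives this through its RabbitMQ modules; a rough equivalent against the RabbitMQ management HTTP API would look like the following (illustrative only; endpoint, vhost and credentials are placeholders):

    import requests

    RABBIT = "http://192.168.16.10:15672/api"        # placeholder management endpoint
    AUTH = ("openstack", "rabbitmq-admin-password")  # placeholder admin credentials

    # PUTs are idempotent: creating an existing user/vhost/permission simply updates it.
    requests.put(f"{RABBIT}/vhosts/%2F", auth=AUTH).raise_for_status()
    requests.put(f"{RABBIT}/users/nova",
                 json={"password": "nova-rabbitmq-password", "tags": ""},
                 auth=AUTH).raise_for_status()
    requests.put(f"{RABBIT}/permissions/%2F/nova",
                 json={"configure": ".*", "write": ".*", "read": ".*"},
                 auth=AUTH).raise_for_status()
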
2025-11-01 14:34:05.922630 | orchestrator |
2025-11-01 14:34:05.922641 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-11-01 14:34:05.922651 | orchestrator | Saturday 01 November 2025 14:26:52 +0000 (0:00:00.738) 0:02:07.330 *****
2025-11-01 14:34:05.922662 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.922680 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.922691 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:34:05.922702 | orchestrator |
2025-11-01 14:34:05.922712 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-11-01 14:34:05.922723 | orchestrator | Saturday 01 November 2025 14:26:53 +0000 (0:00:00.715) 0:02:08.046 *****
2025-11-01 14:34:05.922734 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.922745 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.922755 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:34:05.922766 | orchestrator |
2025-11-01 14:34:05.922777 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-11-01 14:34:05.922787 | orchestrator | Saturday 01 November 2025 14:26:54 +0000 (0:00:01.016) 0:02:09.062 *****
2025-11-01 14:34:05.922845 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.922867 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.922901 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:34:05.922956 | orchestrator |
2025-11-01 14:34:05.922977 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-11-01 14:34:05.922988 | orchestrator | Saturday 01 November 2025 14:26:56 +0000 (0:00:02.514) 0:02:11.576 *****
2025-11-01 14:34:05.923019 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.923030 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.923040 | orchestrator | ok: [testbed-node-0]
2025-11-01 14:34:05.923051 | orchestrator |
2025-11-01 14:34:05.923062 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-11-01 14:34:05.923073 | orchestrator | Saturday 01 November 2025 14:27:20 +0000 (0:00:23.722) 0:02:35.299 *****
2025-11-01 14:34:05.923084 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.923094 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.923105 | orchestrator | ok: [testbed-node-0]
2025-11-01 14:34:05.923116 | orchestrator |
2025-11-01 14:34:05.923127 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-11-01 14:34:05.923137 | orchestrator | Saturday 01 November 2025 14:27:35 +0000 (0:00:14.694) 0:02:49.994 *****
2025-11-01 14:34:05.923148 | orchestrator | ok: [testbed-node-0]
2025-11-01 14:34:05.923159 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.923176 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.923187 | orchestrator |
2025-11-01 14:34:05.923198 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-11-01 14:34:05.923209 | orchestrator | Saturday 01 November 2025 14:27:36 +0000 (0:00:01.329) 0:02:51.323 *****
2025-11-01 14:34:05.923220 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.923231 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.923241 | orchestrator | changed: [testbed-node-0]
2025-11-01 14:34:05.923252 | orchestrator |
2025-11-01 14:34:05.923263 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-11-01 14:34:05.923273 | orchestrator | Saturday 01 November 2025 14:27:50 +0000 (0:00:14.107) 0:03:05.430 *****
2025-11-01 14:34:05.923284 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.923295 | orchestrator | skipping: [testbed-node-0]
2025-11-01 14:34:05.923305 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.923316 | orchestrator |
2025-11-01 14:34:05.923327 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-11-01 14:34:05.923338 | orchestrator | Saturday 01 November 2025 14:27:51 +0000 (0:00:01.168) 0:03:06.599 *****
2025-11-01 14:34:05.923348 | orchestrator | skipping: [testbed-node-0]
2025-11-01 14:34:05.923359 | orchestrator | skipping: [testbed-node-1]
2025-11-01 14:34:05.923370 | orchestrator | skipping: [testbed-node-2]
2025-11-01 14:34:05.923380 | orchestrator |
2025-11-01 14:34:05.923391 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-11-01 14:34:05.923402 | orchestrator |
2025-11-01 14:34:05.923413 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-11-01 14:34:05.923423 | orchestrator | Saturday 01 November 2025 14:27:52 +0000 (0:00:00.566) 0:03:07.165 *****
2025-11-01 14:34:05.923475 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 14:34:05.923489 | orchestrator |
2025-11-01 14:34:05.923500 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-11-01 14:34:05.923510 | orchestrator | Saturday 01 November 2025 14:27:53 +0000 (0:00:00.623) 0:03:07.789 *****
2025-11-01 14:34:05.923521 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-11-01 14:34:05.923532 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-11-01 14:34:05.923543 | orchestrator |
2025-11-01 14:34:05.923554 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-11-01 14:34:05.923565 | orchestrator | Saturday 01 November 2025 14:27:57 +0000 (0:00:04.084) 0:03:11.873 *****
2025-11-01 14:34:05.923575 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-11-01 14:34:05.923587 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-11-01 14:34:05.923598 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-11-01 14:34:05.923609 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
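
Note: the service-ks-register tasks above register the "nova" compute service and its internal and public endpoints (the v2.1 URLs shown in the log) in Keystone; the legacy nova_legacy entries are skipped. A compact equivalent with openstacksdk (illustrative only; the cloud name "testbed" and the region are assumptions, not values taken from this job):

    import openstack

    # Assumes a clouds.yaml entry named "testbed" with admin credentials.
    conn = openstack.connect(cloud="testbed")

    service = conn.identity.create_service(name="nova", type="compute",
                                           description="OpenStack Compute")
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:8774/v2.1"),
        ("public", "https://api.testbed.osism.xyz:8774/v2.1"),
    ]:
        conn.identity.create_endpoint(service_id=service.id, interface=interface,
                                      url=url, region_id="RegionOne")
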
2025-11-01 14:34:05.923619 |
orchestrator | 2025-11-01 14:34:05.923630 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-11-01 14:34:05.923641 | orchestrator | Saturday 01 November 2025 14:28:04 +0000 (0:00:07.238) 0:03:19.112 ***** 2025-11-01 14:34:05.923652 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-01 14:34:05.923662 | orchestrator | 2025-11-01 14:34:05.923673 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-11-01 14:34:05.923684 | orchestrator | Saturday 01 November 2025 14:28:08 +0000 (0:00:03.573) 0:03:22.686 ***** 2025-11-01 14:34:05.923695 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 14:34:05.923705 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-11-01 14:34:05.923716 | orchestrator | 2025-11-01 14:34:05.923727 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-11-01 14:34:05.923737 | orchestrator | Saturday 01 November 2025 14:28:12 +0000 (0:00:04.481) 0:03:27.167 ***** 2025-11-01 14:34:05.923748 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 14:34:05.923758 | orchestrator | 2025-11-01 14:34:05.923769 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-11-01 14:34:05.923780 | orchestrator | Saturday 01 November 2025 14:28:15 +0000 (0:00:03.372) 0:03:30.540 ***** 2025-11-01 14:34:05.923790 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-11-01 14:34:05.923801 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-11-01 14:34:05.923811 | orchestrator | 2025-11-01 14:34:05.923822 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-11-01 14:34:05.923840 | orchestrator | Saturday 01 November 2025 14:28:24 +0000 (0:00:08.253) 0:03:38.793 ***** 2025-11-01 14:34:05.923863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.923889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.923903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.923924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.923943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.923962 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.923974 | orchestrator | 2025-11-01 14:34:05.923985 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-11-01 14:34:05.923996 | orchestrator | Saturday 01 November 2025 14:28:25 +0000 (0:00:01.363) 0:03:40.157 ***** 2025-11-01 14:34:05.924007 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.924018 | orchestrator | 2025-11-01 14:34:05.924029 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-11-01 14:34:05.924039 | orchestrator | Saturday 01 November 2025 14:28:25 +0000 (0:00:00.124) 0:03:40.281 ***** 2025-11-01 14:34:05.924050 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.924061 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.924071 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.924082 | orchestrator | 2025-11-01 14:34:05.924093 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-11-01 14:34:05.924103 | orchestrator | Saturday 01 November 2025 14:28:25 +0000 (0:00:00.312) 0:03:40.594 ***** 2025-11-01 14:34:05.924114 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 14:34:05.924125 | orchestrator | 2025-11-01 14:34:05.924135 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-11-01 14:34:05.924146 | orchestrator | Saturday 01 November 2025 14:28:26 +0000 (0:00:01.005) 0:03:41.599 ***** 2025-11-01 14:34:05.924157 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.924167 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.924178 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.924189 | orchestrator | 2025-11-01 14:34:05.924199 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-11-01 14:34:05.924210 | orchestrator | Saturday 01 November 2025 14:28:27 +0000 (0:00:00.351) 0:03:41.951 ***** 2025-11-01 14:34:05.924221 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:34:05.924232 | orchestrator | 2025-11-01 14:34:05.924242 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-11-01 14:34:05.924253 | orchestrator | Saturday 01 November 2025 14:28:27 +0000 (0:00:00.588) 0:03:42.540 ***** 2025-11-01 14:34:05.924265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.924297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.924312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.924324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.924336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.924354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.924379 | orchestrator | 2025-11-01 14:34:05.924390 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-11-01 14:34:05.924401 | orchestrator | Saturday 01 November 2025 14:28:30 +0000 (0:00:02.725) 0:03:45.265 ***** 2025-11-01 14:34:05.924418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 14:34:05.924431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.924443 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.924470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 14:34:05.924483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.924502 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.924526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 14:34:05.924540 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.924551 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.924562 | orchestrator | 2025-11-01 14:34:05.924573 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-11-01 14:34:05.924584 | orchestrator | Saturday 01 November 2025 14:28:31 +0000 (0:00:00.918) 0:03:46.184 ***** 2025-11-01 14:34:05.924596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 14:34:05.924608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.924626 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.924651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 14:34:05.924663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.924675 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.924686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 14:34:05.924698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.924717 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.924728 | orchestrator | 2025-11-01 14:34:05.924739 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-11-01 14:34:05.924750 | orchestrator | Saturday 01 November 2025 14:28:32 +0000 (0:00:00.877) 0:03:47.062 ***** 2025-11-01 14:34:05.924768 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.924786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.924799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.924817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.924837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.924849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.924860 | orchestrator | 2025-11-01 14:34:05.924871 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-11-01 14:34:05.924887 | orchestrator | Saturday 01 November 2025 14:28:34 +0000 (0:00:02.408) 0:03:49.470 ***** 2025-11-01 14:34:05.924898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.924911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.924941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.924960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.924972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.924984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.924995 | orchestrator | 2025-11-01 14:34:05.925006 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-11-01 14:34:05.925017 | orchestrator | Saturday 01 November 2025 14:28:41 +0000 (0:00:07.049) 0:03:56.520 ***** 2025-11-01 14:34:05.925028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 14:34:05.925052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.925064 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.925081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 14:34:05.925093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.925104 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.925115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 14:34:05.925133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.925144 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.925155 | orchestrator | 2025-11-01 14:34:05.925166 | orchestrator | TASK [nova : Copying over 
nova-api-wsgi.conf] ********************************** 2025-11-01 14:34:05.925177 | orchestrator | Saturday 01 November 2025 14:28:42 +0000 (0:00:00.740) 0:03:57.260 ***** 2025-11-01 14:34:05.925188 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:05.925199 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:05.925209 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:05.925220 | orchestrator | 2025-11-01 14:34:05.925237 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-11-01 14:34:05.925248 | orchestrator | Saturday 01 November 2025 14:28:44 +0000 (0:00:01.636) 0:03:58.897 ***** 2025-11-01 14:34:05.925259 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.925269 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.925280 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.925291 | orchestrator | 2025-11-01 14:34:05.925301 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-11-01 14:34:05.925312 | orchestrator | Saturday 01 November 2025 14:28:44 +0000 (0:00:00.321) 0:03:59.219 ***** 2025-11-01 14:34:05.925331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.925344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.925369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:05.925382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.925398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.925410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.925427 | orchestrator | 2025-11-01 14:34:05.925439 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-11-01 14:34:05.925468 | orchestrator | Saturday 01 November 2025 14:28:46 +0000 (0:00:02.304) 0:04:01.523 ***** 2025-11-01 14:34:05.925479 
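For reference, the nova-api service definition that kolla-ansible repeats in every loop item above unpacks to roughly the following YAML (reconstructed from this log output; key order and indentation are editorial, the empty strings are optional volumes left unset in this testbed, and the healthcheck URL uses each controller's API address, 192.168.16.10/.11/.12):

nova-api:
  container_name: nova_api
  group: nova-api
  image: registry.osism.tech/kolla/nova-api:2024.2
  enabled: true
  privileged: true
  volumes:
    - "/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro"
    - "/etc/localtime:/etc/localtime:ro"
    - "/etc/timezone:/etc/timezone:ro"
    - "/lib/modules:/lib/modules:ro"
    - "kolla_logs:/var/log/kolla/"
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774 "]
    timeout: "30"
  haproxy:
    nova_api:
      enabled: true
      mode: http
      external: false
      port: "8774"
      listen_port: "8774"
      tls_backend: "no"
    nova_api_external:
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "8774"
      listen_port: "8774"
      tls_backend: "no"
    # nova_metadata / nova_metadata_external mirror the two entries above on
    # port 8775; the external metadata endpoint is disabled (enabled: "no").

Because every HAProxy entry sets tls_backend: "no", the "Copying over backend internal TLS certificate" and "Copying over backend internal TLS key" tasks above are skipped on all three controllers, which matches the skipping lines in this log.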
| orchestrator | 2025-11-01 14:34:05.925489 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-11-01 14:34:05.925500 | orchestrator | Saturday 01 November 2025 14:28:47 +0000 (0:00:00.152) 0:04:01.676 ***** 2025-11-01 14:34:05.925511 | orchestrator | 2025-11-01 14:34:05.925521 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-11-01 14:34:05.925532 | orchestrator | Saturday 01 November 2025 14:28:47 +0000 (0:00:00.186) 0:04:01.862 ***** 2025-11-01 14:34:05.925543 | orchestrator | 2025-11-01 14:34:05.925553 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-11-01 14:34:05.925564 | orchestrator | Saturday 01 November 2025 14:28:47 +0000 (0:00:00.177) 0:04:02.040 ***** 2025-11-01 14:34:05.925575 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:05.925585 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:05.925596 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:05.925607 | orchestrator | 2025-11-01 14:34:05.925617 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-11-01 14:34:05.925628 | orchestrator | Saturday 01 November 2025 14:29:08 +0000 (0:00:21.537) 0:04:23.578 ***** 2025-11-01 14:34:05.925639 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:05.925649 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:05.925660 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:05.925671 | orchestrator | 2025-11-01 14:34:05.925681 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-11-01 14:34:05.925692 | orchestrator | 2025-11-01 14:34:05.925703 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-01 14:34:05.925713 | orchestrator | Saturday 01 November 2025 14:29:15 +0000 (0:00:06.109) 0:04:29.687 ***** 2025-11-01 14:34:05.925724 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:34:05.925735 | orchestrator | 2025-11-01 14:34:05.925746 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-01 14:34:05.925756 | orchestrator | Saturday 01 November 2025 14:29:16 +0000 (0:00:01.297) 0:04:30.984 ***** 2025-11-01 14:34:05.925767 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.925778 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.925788 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.925799 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.925810 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.925820 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.925831 | orchestrator | 2025-11-01 14:34:05.925841 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-11-01 14:34:05.925852 | orchestrator | Saturday 01 November 2025 14:29:16 +0000 (0:00:00.647) 0:04:31.632 ***** 2025-11-01 14:34:05.925863 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.925874 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.925884 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.925895 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:34:05.925906 | 
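The nova-cell play above includes a module-load role for the compute nodes (testbed-node-3/4/5) that loads br_netfilter and persists it via modules-load.d; a later task in the same play enables the bridge-nf-call sysctls on those nodes. A minimal sketch of that pattern with stock Ansible modules follows; the module choices and file layout here are illustrative assumptions, not the actual role code used by this deployment:

# Illustrative sketch only; the real module-load / nova-cell tasks may differ.
- name: Load modules
  community.general.modprobe:
    name: "{{ item }}"
    state: present
  loop:
    - br_netfilter
  become: true

- name: Persist modules via modules-load.d
  ansible.builtin.copy:
    content: "{{ item }}\n"
    dest: "/etc/modules-load.d/{{ item }}.conf"
    mode: "0644"
  loop:
    - br_netfilter
  become: true

- name: Enable bridge-nf-call sysctl variables
  ansible.posix.sysctl:
    name: "{{ item }}"
    value: "1"
    sysctl_set: true
    state: present
  loop:
    - net.bridge.bridge-nf-call-iptables
    - net.bridge.bridge-nf-call-ip6tables
  become: true

Loading br_netfilter and enabling these sysctls makes bridged traffic on the hypervisors traverse iptables/ip6tables, which the security-group filtering on the compute nodes depends on; the controllers (testbed-node-0/1/2) skip all of these tasks, as the skipping lines above and below show.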
orchestrator | 2025-11-01 14:34:05.925917 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-11-01 14:34:05.925933 | orchestrator | Saturday 01 November 2025 14:29:18 +0000 (0:00:01.258) 0:04:32.891 ***** 2025-11-01 14:34:05.925944 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-11-01 14:34:05.925955 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-11-01 14:34:05.925966 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-11-01 14:34:05.925976 | orchestrator | 2025-11-01 14:34:05.925987 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-11-01 14:34:05.926005 | orchestrator | Saturday 01 November 2025 14:29:18 +0000 (0:00:00.721) 0:04:33.612 ***** 2025-11-01 14:34:05.926041 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-11-01 14:34:05.926054 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-11-01 14:34:05.926065 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-11-01 14:34:05.926076 | orchestrator | 2025-11-01 14:34:05.926087 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-11-01 14:34:05.926098 | orchestrator | Saturday 01 November 2025 14:29:20 +0000 (0:00:01.764) 0:04:35.377 ***** 2025-11-01 14:34:05.926109 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-11-01 14:34:05.926120 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.926136 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-11-01 14:34:05.926147 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.926157 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-11-01 14:34:05.926168 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.926179 | orchestrator | 2025-11-01 14:34:05.926190 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-11-01 14:34:05.926200 | orchestrator | Saturday 01 November 2025 14:29:21 +0000 (0:00:00.584) 0:04:35.962 ***** 2025-11-01 14:34:05.926211 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 14:34:05.926222 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 14:34:05.926233 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.926243 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 14:34:05.926254 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 14:34:05.926265 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.926276 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-11-01 14:34:05.926287 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 14:34:05.926297 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-11-01 14:34:05.926308 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 14:34:05.926319 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.926330 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-11-01 14:34:05.926340 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-11-01 
14:34:05.926351 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-11-01 14:34:05.926362 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-11-01 14:34:05.926373 | orchestrator | 2025-11-01 14:34:05.926383 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-11-01 14:34:05.926394 | orchestrator | Saturday 01 November 2025 14:29:22 +0000 (0:00:01.344) 0:04:37.306 ***** 2025-11-01 14:34:05.926405 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.926415 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.926426 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.926437 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:34:05.926465 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:34:05.926476 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:34:05.926487 | orchestrator | 2025-11-01 14:34:05.926498 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-11-01 14:34:05.926508 | orchestrator | Saturday 01 November 2025 14:29:23 +0000 (0:00:01.194) 0:04:38.500 ***** 2025-11-01 14:34:05.926519 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.926530 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.926540 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.926551 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:34:05.926568 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:34:05.926579 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:34:05.926590 | orchestrator | 2025-11-01 14:34:05.926600 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-11-01 14:34:05.926611 | orchestrator | Saturday 01 November 2025 14:29:25 +0000 (0:00:01.724) 0:04:40.225 ***** 2025-11-01 14:34:05.926623 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 14:34:05.926648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 14:34:05.926661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 14:34:05.926673 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 14:34:05.926685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.926703 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 14:34:05.926715 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 
14:34:05.926732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.926749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.926761 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.926773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.926792 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 
14:34:05.927091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927137 | orchestrator | 2025-11-01 14:34:05.927148 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-01 14:34:05.927159 | orchestrator | Saturday 01 November 2025 14:29:28 +0000 (0:00:02.613) 0:04:42.839 ***** 2025-11-01 14:34:05.927170 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:34:05.927183 | orchestrator | 2025-11-01 14:34:05.927194 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-11-01 14:34:05.927204 | orchestrator | Saturday 01 November 2025 14:29:29 +0000 (0:00:01.354) 0:04:44.193 ***** 2025-11-01 14:34:05.927216 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927236 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927323 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927395 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.927436 | orchestrator | 2025-11-01 14:34:05.927571 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-11-01 14:34:05.927585 | orchestrator | Saturday 01 November 2025 14:29:33 +0000 (0:00:03.669) 0:04:47.862 ***** 2025-11-01 14:34:05.927627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 14:34:05.927646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 14:34:05.927673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 14:34:05.927737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.927751 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.927763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 14:34:05.927782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.927794 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.927810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 14:34:05.927822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 14:34:05.927841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.927852 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.927863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 14:34:05.927875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.927886 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.927904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 14:34:05.927920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.927932 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.927943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 14:34:05.927961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.927972 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.927983 | orchestrator | 2025-11-01 14:34:05.927994 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-11-01 14:34:05.928005 | orchestrator | Saturday 01 November 2025 14:29:34 +0000 (0:00:01.768) 0:04:49.631 ***** 2025-11-01 14:34:05.928016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 14:34:05.928028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 14:34:05.928046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.928057 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.928072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 14:34:05.928089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 14:34:05.928099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 14:34:05.928109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.928119 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.928134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 14:34:05.928149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.928165 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.928175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 14:34:05.928185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.928195 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.928205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 14:34:05.928215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.928225 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.928234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 14:34:05.928250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.928266 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.928276 | orchestrator | 2025-11-01 14:34:05.928285 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-01 14:34:05.928295 | orchestrator | Saturday 01 November 2025 14:29:37 +0000 (0:00:02.673) 0:04:52.305 ***** 2025-11-01 14:34:05.928305 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.928315 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.928324 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.928338 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 14:34:05.928348 | orchestrator | 2025-11-01 14:34:05.928358 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-11-01 14:34:05.928367 | orchestrator | Saturday 01 November 2025 14:29:38 +0000 (0:00:01.157) 0:04:53.462 ***** 2025-11-01 14:34:05.928377 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-01 14:34:05.928386 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-01 14:34:05.928396 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-01 14:34:05.928405 | orchestrator | 2025-11-01 14:34:05.928415 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-11-01 14:34:05.928424 | orchestrator | Saturday 01 November 2025 14:29:39 +0000 (0:00:01.013) 0:04:54.476 ***** 2025-11-01 14:34:05.928434 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-01 14:34:05.928443 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-01 14:34:05.928469 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-01 14:34:05.928479 | orchestrator | 2025-11-01 14:34:05.928488 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-11-01 14:34:05.928498 | orchestrator | Saturday 01 November 2025 14:29:40 +0000 (0:00:01.036) 0:04:55.512 ***** 2025-11-01 14:34:05.928507 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:34:05.928517 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:34:05.928527 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:34:05.928536 | orchestrator | 2025-11-01 14:34:05.928546 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-11-01 14:34:05.928555 | orchestrator | Saturday 01 November 2025 
14:29:41 +0000 (0:00:00.524) 0:04:56.036 ***** 2025-11-01 14:34:05.928565 | orchestrator | ok: [testbed-node-3] 2025-11-01 14:34:05.928574 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:34:05.928583 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:34:05.928593 | orchestrator | 2025-11-01 14:34:05.928603 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-11-01 14:34:05.928612 | orchestrator | Saturday 01 November 2025 14:29:42 +0000 (0:00:00.869) 0:04:56.906 ***** 2025-11-01 14:34:05.928621 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-11-01 14:34:05.928631 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-11-01 14:34:05.928641 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-11-01 14:34:05.928650 | orchestrator | 2025-11-01 14:34:05.928660 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-11-01 14:34:05.928669 | orchestrator | Saturday 01 November 2025 14:29:43 +0000 (0:00:01.242) 0:04:58.148 ***** 2025-11-01 14:34:05.928679 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-11-01 14:34:05.928688 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-11-01 14:34:05.928698 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-11-01 14:34:05.928707 | orchestrator | 2025-11-01 14:34:05.928716 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-11-01 14:34:05.928726 | orchestrator | Saturday 01 November 2025 14:29:44 +0000 (0:00:01.212) 0:04:59.361 ***** 2025-11-01 14:34:05.928735 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-11-01 14:34:05.928745 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-11-01 14:34:05.928754 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-11-01 14:34:05.928770 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-11-01 14:34:05.928780 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-11-01 14:34:05.928789 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-11-01 14:34:05.928798 | orchestrator | 2025-11-01 14:34:05.928808 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-11-01 14:34:05.928818 | orchestrator | Saturday 01 November 2025 14:29:48 +0000 (0:00:04.014) 0:05:03.375 ***** 2025-11-01 14:34:05.928827 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.928837 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.928846 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.928855 | orchestrator | 2025-11-01 14:34:05.928865 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-11-01 14:34:05.928874 | orchestrator | Saturday 01 November 2025 14:29:49 +0000 (0:00:00.557) 0:05:03.933 ***** 2025-11-01 14:34:05.928884 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.928893 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.928903 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.928912 | orchestrator | 2025-11-01 14:34:05.928922 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-11-01 14:34:05.928931 | orchestrator | Saturday 01 November 2025 14:29:49 +0000 (0:00:00.350) 0:05:04.284 ***** 2025-11-01 14:34:05.928941 | orchestrator | changed: 
[testbed-node-3] 2025-11-01 14:34:05.928950 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:34:05.928960 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:34:05.928969 | orchestrator | 2025-11-01 14:34:05.928983 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-11-01 14:34:05.928993 | orchestrator | Saturday 01 November 2025 14:29:51 +0000 (0:00:01.366) 0:05:05.650 ***** 2025-11-01 14:34:05.929003 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-11-01 14:34:05.929013 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-11-01 14:34:05.929023 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-11-01 14:34:05.929033 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-11-01 14:34:05.929050 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-11-01 14:34:05.929060 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-11-01 14:34:05.929069 | orchestrator | 2025-11-01 14:34:05.929079 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-11-01 14:34:05.929089 | orchestrator | Saturday 01 November 2025 14:29:54 +0000 (0:00:03.559) 0:05:09.210 ***** 2025-11-01 14:34:05.929098 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-01 14:34:05.929108 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-01 14:34:05.929117 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-01 14:34:05.929127 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-01 14:34:05.929136 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-01 14:34:05.929145 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:34:05.929155 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:34:05.929164 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-01 14:34:05.929174 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:34:05.929183 | orchestrator | 2025-11-01 14:34:05.929193 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-11-01 14:34:05.929203 | orchestrator | Saturday 01 November 2025 14:29:58 +0000 (0:00:03.635) 0:05:12.845 ***** 2025-11-01 14:34:05.929218 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.929228 | orchestrator | 2025-11-01 14:34:05.929237 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-11-01 14:34:05.929247 | orchestrator | Saturday 01 November 2025 14:29:58 +0000 (0:00:00.150) 0:05:12.995 ***** 2025-11-01 14:34:05.929256 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.929266 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.929275 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.929284 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.929294 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.929303 | orchestrator 
| skipping: [testbed-node-2] 2025-11-01 14:34:05.929313 | orchestrator | 2025-11-01 14:34:05.929322 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-11-01 14:34:05.929332 | orchestrator | Saturday 01 November 2025 14:29:59 +0000 (0:00:00.646) 0:05:13.642 ***** 2025-11-01 14:34:05.929341 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-01 14:34:05.929351 | orchestrator | 2025-11-01 14:34:05.929360 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-11-01 14:34:05.929370 | orchestrator | Saturday 01 November 2025 14:29:59 +0000 (0:00:00.807) 0:05:14.450 ***** 2025-11-01 14:34:05.929379 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.929389 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.929398 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.929408 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.929417 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.929426 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.929436 | orchestrator | 2025-11-01 14:34:05.929458 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-11-01 14:34:05.929468 | orchestrator | Saturday 01 November 2025 14:30:00 +0000 (0:00:00.874) 0:05:15.324 ***** 2025-11-01 14:34:05.929478 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929495 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929573 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929588 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929634 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929649 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929664 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929679 | orchestrator | 2025-11-01 14:34:05.929689 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-11-01 14:34:05.929699 | orchestrator | Saturday 01 November 2025 14:30:04 +0000 (0:00:03.841) 0:05:19.166 ***** 2025-11-01 14:34:05.929709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 14:34:05.929719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 14:34:05.929729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 14:34:05.929739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 14:34:05.929755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 14:34:05.929775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 14:34:05.929785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.929805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.930064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.930095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.930105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.930116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.930126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.930137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.930147 | orchestrator | 2025-11-01 14:34:05.930157 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-11-01 14:34:05.930167 | orchestrator | Saturday 01 November 2025 14:30:11 +0000 (0:00:06.982) 0:05:26.149 ***** 2025-11-01 14:34:05.930176 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.930186 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.930196 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.930206 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.930215 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.930225 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.930234 | orchestrator | 2025-11-01 14:34:05.930244 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-11-01 14:34:05.930254 | orchestrator | Saturday 01 November 2025 14:30:13 +0000 (0:00:01.861) 0:05:28.011 ***** 2025-11-01 14:34:05.930264 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-11-01 14:34:05.930279 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-11-01 14:34:05.930289 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-11-01 14:34:05.930299 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-11-01 14:34:05.930315 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-11-01 14:34:05.930325 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-11-01 14:34:05.930335 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.930345 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-11-01 14:34:05.930355 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-11-01 14:34:05.930365 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.930375 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-11-01 14:34:05.930385 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.930395 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-11-01 14:34:05.930404 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-11-01 14:34:05.930414 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-11-01 14:34:05.930424 | orchestrator | 2025-11-01 14:34:05.930434 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-11-01 14:34:05.930494 | orchestrator | Saturday 01 November 2025 14:30:17 +0000 (0:00:04.175) 0:05:32.187 ***** 2025-11-01 14:34:05.930507 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.930517 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.930528 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.930538 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.930548 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.930558 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.930592 | orchestrator | 2025-11-01 14:34:05.930603 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-11-01 14:34:05.930613 | orchestrator | Saturday 01 November 2025 14:30:18 +0000 (0:00:00.662) 0:05:32.850 ***** 2025-11-01 14:34:05.930623 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-11-01 14:34:05.930634 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-11-01 14:34:05.930644 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-11-01 14:34:05.930654 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-11-01 14:34:05.930666 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-11-01 14:34:05.930677 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-11-01 14:34:05.930686 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-11-01 14:34:05.930695 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-11-01 14:34:05.930705 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 
'nova-libvirt'})  2025-11-01 14:34:05.930714 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-11-01 14:34:05.930724 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.930739 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-11-01 14:34:05.930749 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.930759 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-11-01 14:34:05.930768 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.930777 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-11-01 14:34:05.930786 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-11-01 14:34:05.930796 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-11-01 14:34:05.930805 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-11-01 14:34:05.930910 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-11-01 14:34:05.930932 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-11-01 14:34:05.930941 | orchestrator | 2025-11-01 14:34:05.930950 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-11-01 14:34:05.930959 | orchestrator | Saturday 01 November 2025 14:30:23 +0000 (0:00:05.677) 0:05:38.528 ***** 2025-11-01 14:34:05.930968 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-11-01 14:34:05.930978 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-11-01 14:34:05.930993 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-11-01 14:34:05.931002 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-01 14:34:05.931012 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-01 14:34:05.931021 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-11-01 14:34:05.931030 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-11-01 14:34:05.931038 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-11-01 14:34:05.931045 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-11-01 14:34:05.931053 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-11-01 14:34:05.931061 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-11-01 14:34:05.931072 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-11-01 14:34:05.931080 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-11-01 
14:34:05.931088 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.931096 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-01 14:34:05.931104 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-11-01 14:34:05.931112 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.931119 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-01 14:34:05.931127 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-11-01 14:34:05.931135 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-11-01 14:34:05.931142 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.931150 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-01 14:34:05.931164 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-01 14:34:05.931172 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-11-01 14:34:05.931180 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-01 14:34:05.931188 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-01 14:34:05.931195 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-11-01 14:34:05.931203 | orchestrator | 2025-11-01 14:34:05.931211 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-11-01 14:34:05.931219 | orchestrator | Saturday 01 November 2025 14:30:31 +0000 (0:00:07.368) 0:05:45.896 ***** 2025-11-01 14:34:05.931226 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.931234 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.931242 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.931250 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.931257 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.931265 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.931273 | orchestrator | 2025-11-01 14:34:05.931280 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-11-01 14:34:05.931292 | orchestrator | Saturday 01 November 2025 14:30:32 +0000 (0:00:00.877) 0:05:46.774 ***** 2025-11-01 14:34:05.931300 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.931307 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.931315 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.931323 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.931331 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.931338 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.931346 | orchestrator | 2025-11-01 14:34:05.931354 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-11-01 14:34:05.931362 | orchestrator | Saturday 01 November 2025 14:30:32 +0000 (0:00:00.684) 0:05:47.459 ***** 2025-11-01 14:34:05.931370 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.931378 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.931385 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:34:05.931393 | orchestrator | skipping: [testbed-node-2] 
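The (item={...}) payloads printed by the nova-cell tasks above are the kolla-ansible container definitions for each service, rendered as Python literals, including the Docker-style healthcheck each container is configured with (the healthcheck_port / healthcheck_listen / healthcheck_curl commands in the CMD-SHELL tests are helpers shipped inside the kolla images, not host binaries). As a minimal, purely illustrative sketch and not part of the deployment itself, the snippet below parses one such entry, abridged (volumes omitted) from the "Check nova-cell containers" output in this log, and prints the docker-run-equivalent healthcheck flags; treating the bare interval/timeout/start_period numbers as seconds is an assumption made here for readability.

    import ast

    # One loop item abridged from the "Check nova-cell containers" output above
    # (volumes dropped to keep the literal short).
    logged_item = (
        "{'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', "
        "'group': 'nova-conductor', 'enabled': True, "
        "'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', "
        "'dimensions': {}, "
        "'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', "
        "'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}"
    )

    service = ast.literal_eval(logged_item)      # safe: the payload is a plain literal
    name = service['value']['container_name']
    hc = service['value']['healthcheck']

    # Docker-run equivalent of the kolla healthcheck block
    # (assumption: the bare numbers in the log are seconds).
    print(f"docker run --name {name} \\")
    print(f"  --health-cmd '{hc['test'][1]}' \\")
    print(f"  --health-interval {hc['interval']}s \\")
    print(f"  --health-timeout {hc['timeout']}s \\")
    print(f"  --health-retries {hc['retries']} \\")
    print(f"  --health-start-period {hc['start_period']}s \\")
    print(f"  {service['value']['image']}")

This is only a reading aid for the logged dict; in the actual job the container options, including the healthcheck, are applied by the kolla-ansible role itself, as the subsequent handler restarts show.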
2025-11-01 14:34:05.931401 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:34:05.931408 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:34:05.931416 | orchestrator | 2025-11-01 14:34:05.931424 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-11-01 14:34:05.931432 | orchestrator | Saturday 01 November 2025 14:30:35 +0000 (0:00:02.298) 0:05:49.757 ***** 2025-11-01 14:34:05.931461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 14:34:05.931472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 14:34:05.931491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.931500 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.931508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 14:34:05.931517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 14:34:05.931525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.931533 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.931546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 14:34:05.931563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 14:34:05.931572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.931580 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.931588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 14:34:05.931597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.931605 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.931613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 14:34:05.931625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.931639 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.931651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 14:34:05.931660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 14:34:05.931668 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.931675 | orchestrator | 2025-11-01 14:34:05.931683 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-11-01 14:34:05.931691 | orchestrator | Saturday 01 November 2025 14:30:36 +0000 (0:00:01.658) 0:05:51.415 ***** 2025-11-01 14:34:05.931699 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-11-01 14:34:05.931707 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-11-01 14:34:05.931715 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.931723 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-11-01 14:34:05.931730 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-11-01 14:34:05.931738 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.931746 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-11-01 14:34:05.931754 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-11-01 14:34:05.931762 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.931769 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-11-01 14:34:05.931777 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-11-01 14:34:05.931785 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.931793 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-11-01 14:34:05.931800 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-11-01 14:34:05.931808 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.931816 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-11-01 14:34:05.931824 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-11-01 14:34:05.931831 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.931839 | orchestrator | 2025-11-01 14:34:05.931847 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-11-01 14:34:05.931855 | orchestrator | Saturday 01 November 2025 14:30:37 +0000 (0:00:00.957) 0:05:52.373 ***** 2025-11-01 14:34:05.931863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 14:34:05.931882 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 14:34:05.931895 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 14:34:05.931904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.931913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}}) 2025-11-01 14:34:05.931921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.931934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 14:34:05.931947 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 14:34:05.931959 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 14:34:05.931968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.931976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.931984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.931992 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.932187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.932209 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:05.932217 | orchestrator | 2025-11-01 14:34:05.932225 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-01 14:34:05.932233 | orchestrator | Saturday 01 November 2025 14:30:40 +0000 (0:00:02.919) 
0:05:55.293 ***** 2025-11-01 14:34:05.932241 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.932249 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.932257 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.932265 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.932272 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.932280 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.932288 | orchestrator | 2025-11-01 14:34:05.932296 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-01 14:34:05.932304 | orchestrator | Saturday 01 November 2025 14:30:41 +0000 (0:00:00.850) 0:05:56.143 ***** 2025-11-01 14:34:05.932311 | orchestrator | 2025-11-01 14:34:05.932319 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-01 14:34:05.932327 | orchestrator | Saturday 01 November 2025 14:30:41 +0000 (0:00:00.162) 0:05:56.306 ***** 2025-11-01 14:34:05.932335 | orchestrator | 2025-11-01 14:34:05.932342 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-01 14:34:05.932350 | orchestrator | Saturday 01 November 2025 14:30:41 +0000 (0:00:00.133) 0:05:56.439 ***** 2025-11-01 14:34:05.932358 | orchestrator | 2025-11-01 14:34:05.932366 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-01 14:34:05.932373 | orchestrator | Saturday 01 November 2025 14:30:41 +0000 (0:00:00.136) 0:05:56.576 ***** 2025-11-01 14:34:05.932381 | orchestrator | 2025-11-01 14:34:05.932389 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-01 14:34:05.932397 | orchestrator | Saturday 01 November 2025 14:30:42 +0000 (0:00:00.143) 0:05:56.720 ***** 2025-11-01 14:34:05.932404 | orchestrator | 2025-11-01 14:34:05.932412 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-01 14:34:05.932426 | orchestrator | Saturday 01 November 2025 14:30:42 +0000 (0:00:00.149) 0:05:56.870 ***** 2025-11-01 14:34:05.932434 | orchestrator | 2025-11-01 14:34:05.932441 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-11-01 14:34:05.932464 | orchestrator | Saturday 01 November 2025 14:30:42 +0000 (0:00:00.335) 0:05:57.205 ***** 2025-11-01 14:34:05.932472 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:05.932480 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:05.932487 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:05.932495 | orchestrator | 2025-11-01 14:34:05.932503 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-11-01 14:34:05.932511 | orchestrator | Saturday 01 November 2025 14:30:52 +0000 (0:00:10.341) 0:06:07.547 ***** 2025-11-01 14:34:05.932519 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:05.932527 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:05.932534 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:05.932542 | orchestrator | 2025-11-01 14:34:05.932550 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-11-01 14:34:05.932558 | orchestrator | Saturday 01 November 2025 14:31:08 +0000 (0:00:15.268) 0:06:22.816 ***** 2025-11-01 14:34:05.932566 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:34:05.932573 | orchestrator | 
changed: [testbed-node-3] 2025-11-01 14:34:05.932581 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:34:05.932589 | orchestrator | 2025-11-01 14:34:05.932597 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-11-01 14:34:05.932604 | orchestrator | Saturday 01 November 2025 14:31:34 +0000 (0:00:26.720) 0:06:49.537 ***** 2025-11-01 14:34:05.932612 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:34:05.932620 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:34:05.932628 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:34:05.932635 | orchestrator | 2025-11-01 14:34:05.932643 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-11-01 14:34:05.932651 | orchestrator | Saturday 01 November 2025 14:32:11 +0000 (0:00:36.223) 0:07:25.760 ***** 2025-11-01 14:34:05.932659 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2025-11-01 14:34:05.932667 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-11-01 14:34:05.932675 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-11-01 14:34:05.932683 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:34:05.932690 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:34:05.932698 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:34:05.932706 | orchestrator | 2025-11-01 14:34:05.932718 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-11-01 14:34:05.932726 | orchestrator | Saturday 01 November 2025 14:32:17 +0000 (0:00:06.298) 0:07:32.058 ***** 2025-11-01 14:34:05.932734 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:34:05.932742 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:34:05.932750 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:34:05.932757 | orchestrator | 2025-11-01 14:34:05.932765 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-11-01 14:34:05.932773 | orchestrator | Saturday 01 November 2025 14:32:18 +0000 (0:00:00.842) 0:07:32.901 ***** 2025-11-01 14:34:05.932781 | orchestrator | changed: [testbed-node-3] 2025-11-01 14:34:05.932789 | orchestrator | changed: [testbed-node-4] 2025-11-01 14:34:05.932796 | orchestrator | changed: [testbed-node-5] 2025-11-01 14:34:05.932805 | orchestrator | 2025-11-01 14:34:05.932814 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-11-01 14:34:05.932823 | orchestrator | Saturday 01 November 2025 14:32:42 +0000 (0:00:24.474) 0:07:57.376 ***** 2025-11-01 14:34:05.932832 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.932840 | orchestrator | 2025-11-01 14:34:05.932849 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-11-01 14:34:05.932868 | orchestrator | Saturday 01 November 2025 14:32:42 +0000 (0:00:00.153) 0:07:57.529 ***** 2025-11-01 14:34:05.932877 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.932886 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.932895 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.932904 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.932913 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.932922 | orchestrator | FAILED - 
RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-11-01 14:34:05.932931 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-01 14:34:05.932940 | orchestrator | 2025-11-01 14:34:05.932949 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-11-01 14:34:05.932958 | orchestrator | Saturday 01 November 2025 14:33:09 +0000 (0:00:26.199) 0:08:23.728 ***** 2025-11-01 14:34:05.932967 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.932976 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.932984 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.932993 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.933002 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.933011 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.933020 | orchestrator | 2025-11-01 14:34:05.933028 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-11-01 14:34:05.933037 | orchestrator | Saturday 01 November 2025 14:33:21 +0000 (0:00:12.331) 0:08:36.060 ***** 2025-11-01 14:34:05.933046 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.933055 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.933063 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.933072 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.933081 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.933090 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-11-01 14:34:05.933099 | orchestrator | 2025-11-01 14:34:05.933107 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-11-01 14:34:05.933116 | orchestrator | Saturday 01 November 2025 14:33:26 +0000 (0:00:04.858) 0:08:40.918 ***** 2025-11-01 14:34:05.933125 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-01 14:34:05.933134 | orchestrator | 2025-11-01 14:34:05.933143 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-11-01 14:34:05.933152 | orchestrator | Saturday 01 November 2025 14:33:40 +0000 (0:00:14.483) 0:08:55.402 ***** 2025-11-01 14:34:05.933161 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-01 14:34:05.933168 | orchestrator | 2025-11-01 14:34:05.933176 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-11-01 14:34:05.933184 | orchestrator | Saturday 01 November 2025 14:33:42 +0000 (0:00:01.517) 0:08:56.920 ***** 2025-11-01 14:34:05.933192 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.933200 | orchestrator | 2025-11-01 14:34:05.933207 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-11-01 14:34:05.933215 | orchestrator | Saturday 01 November 2025 14:33:43 +0000 (0:00:01.480) 0:08:58.401 ***** 2025-11-01 14:34:05.933223 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-11-01 14:34:05.933231 | orchestrator | 2025-11-01 14:34:05.933239 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-11-01 14:34:05.933246 | orchestrator | Saturday 01 November 2025 14:33:56 +0000 (0:00:12.557) 0:09:10.959 ***** 2025-11-01 14:34:05.933254 | orchestrator | ok: 
[testbed-node-3] 2025-11-01 14:34:05.933262 | orchestrator | ok: [testbed-node-4] 2025-11-01 14:34:05.933270 | orchestrator | ok: [testbed-node-5] 2025-11-01 14:34:05.933278 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:34:05.933286 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:34:05.933293 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:34:05.933306 | orchestrator | 2025-11-01 14:34:05.933314 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-11-01 14:34:05.933322 | orchestrator | 2025-11-01 14:34:05.933330 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-11-01 14:34:05.933338 | orchestrator | Saturday 01 November 2025 14:33:58 +0000 (0:00:01.984) 0:09:12.943 ***** 2025-11-01 14:34:05.933345 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:05.933353 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:05.933361 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:05.933369 | orchestrator | 2025-11-01 14:34:05.933376 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-11-01 14:34:05.933384 | orchestrator | 2025-11-01 14:34:05.933392 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-11-01 14:34:05.933400 | orchestrator | Saturday 01 November 2025 14:33:59 +0000 (0:00:01.232) 0:09:14.175 ***** 2025-11-01 14:34:05.933408 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.933415 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.933423 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.933431 | orchestrator | 2025-11-01 14:34:05.933442 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-11-01 14:34:05.933494 | orchestrator | 2025-11-01 14:34:05.933503 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-11-01 14:34:05.933511 | orchestrator | Saturday 01 November 2025 14:34:00 +0000 (0:00:00.631) 0:09:14.807 ***** 2025-11-01 14:34:05.933518 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-11-01 14:34:05.933526 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-11-01 14:34:05.933534 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-11-01 14:34:05.933542 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-11-01 14:34:05.933550 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-11-01 14:34:05.933558 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-11-01 14:34:05.933565 | orchestrator | skipping: [testbed-node-3] 2025-11-01 14:34:05.933573 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-11-01 14:34:05.933581 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-11-01 14:34:05.933593 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-11-01 14:34:05.933601 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-11-01 14:34:05.933609 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-11-01 14:34:05.933617 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-11-01 14:34:05.933624 | orchestrator | skipping: [testbed-node-4] 2025-11-01 14:34:05.933632 | orchestrator | skipping: [testbed-node-5] => 
(item=nova-conductor)  2025-11-01 14:34:05.933640 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-11-01 14:34:05.933648 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-11-01 14:34:05.933655 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-11-01 14:34:05.933663 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-11-01 14:34:05.933671 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-11-01 14:34:05.933678 | orchestrator | skipping: [testbed-node-5] 2025-11-01 14:34:05.933686 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-11-01 14:34:05.933694 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-11-01 14:34:05.933701 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-11-01 14:34:05.933709 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-11-01 14:34:05.933717 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-11-01 14:34:05.933725 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-11-01 14:34:05.933733 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.933746 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-11-01 14:34:05.933754 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-11-01 14:34:05.933762 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-11-01 14:34:05.933769 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-11-01 14:34:05.933777 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-11-01 14:34:05.933785 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-11-01 14:34:05.933793 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.933800 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-11-01 14:34:05.933808 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-11-01 14:34:05.933816 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-11-01 14:34:05.933823 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-11-01 14:34:05.933831 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-11-01 14:34:05.933839 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-11-01 14:34:05.933847 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.933854 | orchestrator | 2025-11-01 14:34:05.933862 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-11-01 14:34:05.933870 | orchestrator | 2025-11-01 14:34:05.933877 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-11-01 14:34:05.933885 | orchestrator | Saturday 01 November 2025 14:34:01 +0000 (0:00:01.494) 0:09:16.301 ***** 2025-11-01 14:34:05.933893 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-11-01 14:34:05.933901 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-11-01 14:34:05.933908 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.933916 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-11-01 14:34:05.933924 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-11-01 14:34:05.933932 | orchestrator | skipping: 
[testbed-node-1] 2025-11-01 14:34:05.933939 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-11-01 14:34:05.933947 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-11-01 14:34:05.933955 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.933962 | orchestrator | 2025-11-01 14:34:05.933970 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-11-01 14:34:05.933978 | orchestrator | 2025-11-01 14:34:05.933986 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-11-01 14:34:05.933993 | orchestrator | Saturday 01 November 2025 14:34:02 +0000 (0:00:00.814) 0:09:17.116 ***** 2025-11-01 14:34:05.934001 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.934009 | orchestrator | 2025-11-01 14:34:05.934053 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-11-01 14:34:05.934060 | orchestrator | 2025-11-01 14:34:05.934066 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-11-01 14:34:05.934073 | orchestrator | Saturday 01 November 2025 14:34:03 +0000 (0:00:00.753) 0:09:17.870 ***** 2025-11-01 14:34:05.934080 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:05.934090 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:05.934097 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:05.934104 | orchestrator | 2025-11-01 14:34:05.934110 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:34:05.934117 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 14:34:05.934124 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-11-01 14:34:05.934131 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-11-01 14:34:05.934142 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-11-01 14:34:05.934152 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-11-01 14:34:05.934159 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-11-01 14:34:05.934166 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-11-01 14:34:05.934172 | orchestrator | 2025-11-01 14:34:05.934179 | orchestrator | 2025-11-01 14:34:05.934185 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:34:05.934192 | orchestrator | Saturday 01 November 2025 14:34:03 +0000 (0:00:00.486) 0:09:18.356 ***** 2025-11-01 14:34:05.934198 | orchestrator | =============================================================================== 2025-11-01 14:34:05.934205 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 36.22s 2025-11-01 14:34:05.934212 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 35.60s 2025-11-01 14:34:05.934218 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.72s 2025-11-01 14:34:05.934225 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 26.20s 2025-11-01 
14:34:05.934231 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.47s 2025-11-01 14:34:05.934238 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 23.72s 2025-11-01 14:34:05.934244 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.54s 2025-11-01 14:34:05.934251 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.36s 2025-11-01 14:34:05.934257 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.98s 2025-11-01 14:34:05.934264 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.38s 2025-11-01 14:34:05.934270 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.27s 2025-11-01 14:34:05.934277 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.69s 2025-11-01 14:34:05.934284 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.48s 2025-11-01 14:34:05.934290 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.11s 2025-11-01 14:34:05.934297 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.56s 2025-11-01 14:34:05.934303 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 12.33s 2025-11-01 14:34:05.934310 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 10.34s 2025-11-01 14:34:05.934316 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.52s 2025-11-01 14:34:05.934323 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.25s 2025-11-01 14:34:05.934329 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.37s 2025-11-01 14:34:05.934336 | orchestrator | 2025-11-01 14:34:05 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:05.934342 | orchestrator | 2025-11-01 14:34:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:08.969191 | orchestrator | 2025-11-01 14:34:08 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:34:08.971848 | orchestrator | 2025-11-01 14:34:08 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:08.971957 | orchestrator | 2025-11-01 14:34:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:12.009939 | orchestrator | 2025-11-01 14:34:12 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:34:12.012187 | orchestrator | 2025-11-01 14:34:12 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:12.012220 | orchestrator | 2025-11-01 14:34:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:15.058208 | orchestrator | 2025-11-01 14:34:15 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:34:15.059677 | orchestrator | 2025-11-01 14:34:15 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:15.059767 | orchestrator | 2025-11-01 14:34:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:18.097180 | orchestrator | 2025-11-01 14:34:18 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:34:18.099253 | orchestrator | 
2025-11-01 14:34:18 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:18.099344 | orchestrator | 2025-11-01 14:34:18 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:21.146920 | orchestrator | 2025-11-01 14:34:21 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:34:21.147985 | orchestrator | 2025-11-01 14:34:21 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:21.148014 | orchestrator | 2025-11-01 14:34:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:24.189544 | orchestrator | 2025-11-01 14:34:24 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:34:24.192124 | orchestrator | 2025-11-01 14:34:24 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:24.192153 | orchestrator | 2025-11-01 14:34:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:27.238242 | orchestrator | 2025-11-01 14:34:27 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:34:27.240934 | orchestrator | 2025-11-01 14:34:27 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:27.240965 | orchestrator | 2025-11-01 14:34:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:30.286857 | orchestrator | 2025-11-01 14:34:30 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:34:30.289885 | orchestrator | 2025-11-01 14:34:30 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:30.289917 | orchestrator | 2025-11-01 14:34:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:33.339193 | orchestrator | 2025-11-01 14:34:33 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:34:33.340742 | orchestrator | 2025-11-01 14:34:33 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:33.340769 | orchestrator | 2025-11-01 14:34:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:36.404813 | orchestrator | 2025-11-01 14:34:36 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:34:36.407342 | orchestrator | 2025-11-01 14:34:36 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:36.407379 | orchestrator | 2025-11-01 14:34:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:39.458960 | orchestrator | 2025-11-01 14:34:39 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:34:39.460023 | orchestrator | 2025-11-01 14:34:39 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:39.460428 | orchestrator | 2025-11-01 14:34:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:42.511306 | orchestrator | 2025-11-01 14:34:42 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:34:42.513793 | orchestrator | 2025-11-01 14:34:42 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:42.514123 | orchestrator | 2025-11-01 14:34:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:45.568920 | orchestrator | 2025-11-01 14:34:45 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:34:45.570189 | orchestrator | 2025-11-01 14:34:45 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 
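The nova-cell play above restarts the libvirt and nova-compute containers, then blocks until every nova-compute service has re-registered before mapping the hosts into their cell (the "Waiting for nova-compute services to register themselves" and "Discover nova hosts" tasks; in this run a single retry was enough). A minimal, stand-alone sketch of that wait-then-discover flow is shown below. It is not the kolla-ansible role itself: it assumes admin credentials are already present in the environment and that nova-manage can be run inside a container named nova_api on the first controller, both of which are illustrative assumptions.

```yaml
---
# Minimal sketch of the wait-then-discover flow; not the kolla-ansible role itself.
# Assumptions (illustrative): admin OS_* credentials in the environment, and a
# kolla-style controller where nova-manage runs inside a container named nova_api.
- hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: Wait until all nova-compute services report state "up"
      ansible.builtin.command:
        cmd: openstack compute service list --service nova-compute -f value -c State
      register: compute_states
      changed_when: false
      retries: 20
      delay: 10
      until: >-
        compute_states.stdout_lines | length > 0 and
        compute_states.stdout_lines | unique == ['up']

    - name: Map newly registered compute hosts into their cell
      ansible.builtin.command:
        cmd: docker exec nova_api nova-manage cell_v2 discover_hosts --verbose
```

Polling the service list with retries, rather than sleeping for a fixed interval, is what keeps this step robust against slow container restarts, which is why the role's check above retries instead of failing on the first miss.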
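The discovery step is preceded by "Get a list of existing cells" and "Extract current cell settings from list", which resolve the cell that the newly registered hosts should be mapped into. To verify the outcome by hand, the same information can be read back with nova-manage; the sketch below again assumes a kolla-style nova_api container, which may not match the actual deployment.

```yaml
---
# Hedged verification sketch (the container name nova_api is an assumption):
# list the registered cells, then the compute hosts mapped into them.
- hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: Show registered cells (cell0 plus the default cell)
      ansible.builtin.command:
        cmd: docker exec nova_api nova-manage cell_v2 list_cells
      changed_when: false

    - name: Show compute hosts mapped into cells
      ansible.builtin.command:
        cmd: docker exec nova_api nova-manage cell_v2 list_hosts
      changed_when: false
```

The PLAY RECAP above (failed=0 on all nodes) indicates that the mapping completed cleanly in this run.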
2025-11-01 14:34:45.570220 | orchestrator | 2025-11-01 14:34:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:48.619172 | orchestrator | 2025-11-01 14:34:48 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state STARTED 2025-11-01 14:34:48.622235 | orchestrator | 2025-11-01 14:34:48 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:48.622271 | orchestrator | 2025-11-01 14:34:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:51.673503 | orchestrator | 2025-11-01 14:34:51 | INFO  | Task 4706d79e-9586-47b3-bb7d-ffe9079d2493 is in state SUCCESS 2025-11-01 14:34:51.675360 | orchestrator | 2025-11-01 14:34:51.675423 | orchestrator | 2025-11-01 14:34:51.675461 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 14:34:51.675473 | orchestrator | 2025-11-01 14:34:51.675483 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 14:34:51.675493 | orchestrator | Saturday 01 November 2025 14:29:23 +0000 (0:00:00.361) 0:00:00.361 ***** 2025-11-01 14:34:51.675503 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:34:51.675514 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:34:51.675524 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:34:51.675534 | orchestrator | 2025-11-01 14:34:51.675544 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 14:34:51.675553 | orchestrator | Saturday 01 November 2025 14:29:23 +0000 (0:00:00.361) 0:00:00.722 ***** 2025-11-01 14:34:51.675563 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-11-01 14:34:51.675603 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-11-01 14:34:51.675613 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-11-01 14:34:51.675622 | orchestrator | 2025-11-01 14:34:51.675632 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-11-01 14:34:51.675641 | orchestrator | 2025-11-01 14:34:51.675651 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-01 14:34:51.675676 | orchestrator | Saturday 01 November 2025 14:29:24 +0000 (0:00:00.488) 0:00:01.211 ***** 2025-11-01 14:34:51.675686 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:34:51.675697 | orchestrator | 2025-11-01 14:34:51.675707 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-11-01 14:34:51.675747 | orchestrator | Saturday 01 November 2025 14:29:24 +0000 (0:00:00.659) 0:00:01.871 ***** 2025-11-01 14:34:51.675758 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-11-01 14:34:51.675768 | orchestrator | 2025-11-01 14:34:51.675777 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-11-01 14:34:51.675787 | orchestrator | Saturday 01 November 2025 14:29:28 +0000 (0:00:03.825) 0:00:05.696 ***** 2025-11-01 14:34:51.675796 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-11-01 14:34:51.675806 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-11-01 14:34:51.675835 | orchestrator | 2025-11-01 14:34:51.675845 | orchestrator | TASK 
[service-ks-register : octavia | Creating projects] *********************** 2025-11-01 14:34:51.675855 | orchestrator | Saturday 01 November 2025 14:29:35 +0000 (0:00:07.326) 0:00:13.023 ***** 2025-11-01 14:34:51.675864 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-01 14:34:51.675874 | orchestrator | 2025-11-01 14:34:51.675972 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-11-01 14:34:51.675983 | orchestrator | Saturday 01 November 2025 14:29:39 +0000 (0:00:03.576) 0:00:16.599 ***** 2025-11-01 14:34:51.675994 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 14:34:51.676005 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-11-01 14:34:51.676016 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-11-01 14:34:51.676027 | orchestrator | 2025-11-01 14:34:51.676037 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-11-01 14:34:51.676048 | orchestrator | Saturday 01 November 2025 14:29:48 +0000 (0:00:08.754) 0:00:25.353 ***** 2025-11-01 14:34:51.676059 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 14:34:51.676070 | orchestrator | 2025-11-01 14:34:51.676080 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-11-01 14:34:51.676091 | orchestrator | Saturday 01 November 2025 14:29:52 +0000 (0:00:03.795) 0:00:29.149 ***** 2025-11-01 14:34:51.676102 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-11-01 14:34:51.676112 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-11-01 14:34:51.676123 | orchestrator | 2025-11-01 14:34:51.676133 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-11-01 14:34:51.676144 | orchestrator | Saturday 01 November 2025 14:29:59 +0000 (0:00:07.922) 0:00:37.071 ***** 2025-11-01 14:34:51.676154 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-11-01 14:34:51.676164 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-11-01 14:34:51.676175 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-11-01 14:34:51.676186 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-11-01 14:34:51.676197 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-11-01 14:34:51.676207 | orchestrator | 2025-11-01 14:34:51.676217 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-01 14:34:51.676228 | orchestrator | Saturday 01 November 2025 14:30:17 +0000 (0:00:17.275) 0:00:54.346 ***** 2025-11-01 14:34:51.676239 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:34:51.676250 | orchestrator | 2025-11-01 14:34:51.676260 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-11-01 14:34:51.676271 | orchestrator | Saturday 01 November 2025 14:30:17 +0000 (0:00:00.672) 0:00:55.019 ***** 2025-11-01 14:34:51.676282 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.676292 | orchestrator | 2025-11-01 14:34:51.676302 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-11-01 14:34:51.676312 | orchestrator | Saturday 01 November 
2025 14:30:24 +0000 (0:00:06.230) 0:01:01.250 ***** 2025-11-01 14:34:51.676321 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.676331 | orchestrator | 2025-11-01 14:34:51.676340 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-11-01 14:34:51.676362 | orchestrator | Saturday 01 November 2025 14:30:29 +0000 (0:00:05.241) 0:01:06.492 ***** 2025-11-01 14:34:51.676372 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:34:51.676381 | orchestrator | 2025-11-01 14:34:51.676391 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-11-01 14:34:51.676401 | orchestrator | Saturday 01 November 2025 14:30:32 +0000 (0:00:03.528) 0:01:10.020 ***** 2025-11-01 14:34:51.676410 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-11-01 14:34:51.676420 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-11-01 14:34:51.676452 | orchestrator | 2025-11-01 14:34:51.676463 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-11-01 14:34:51.676473 | orchestrator | Saturday 01 November 2025 14:30:44 +0000 (0:00:11.722) 0:01:21.743 ***** 2025-11-01 14:34:51.676482 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-11-01 14:34:51.676520 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-11-01 14:34:51.676536 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-11-01 14:34:51.676547 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-11-01 14:34:51.676557 | orchestrator | 2025-11-01 14:34:51.676566 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-11-01 14:34:51.676576 | orchestrator | Saturday 01 November 2025 14:31:03 +0000 (0:00:18.662) 0:01:40.406 ***** 2025-11-01 14:34:51.676585 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.676605 | orchestrator | 2025-11-01 14:34:51.676615 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-11-01 14:34:51.676625 | orchestrator | Saturday 01 November 2025 14:31:08 +0000 (0:00:05.269) 0:01:45.675 ***** 2025-11-01 14:34:51.676634 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.676644 | orchestrator | 2025-11-01 14:34:51.676653 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-11-01 14:34:51.676663 | orchestrator | Saturday 01 November 2025 14:31:15 +0000 (0:00:06.484) 0:01:52.159 ***** 2025-11-01 14:34:51.676672 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:51.676682 | orchestrator | 2025-11-01 14:34:51.676691 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-11-01 14:34:51.676701 | orchestrator | Saturday 01 November 2025 14:31:15 +0000 (0:00:00.252) 0:01:52.412 ***** 2025-11-01 14:34:51.676710 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:34:51.676720 | orchestrator | 2025-11-01 14:34:51.676729 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-01 
14:34:51.676739 | orchestrator | Saturday 01 November 2025 14:31:20 +0000 (0:00:05.032) 0:01:57.445 ***** 2025-11-01 14:34:51.676748 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:34:51.676758 | orchestrator | 2025-11-01 14:34:51.676768 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-11-01 14:34:51.676777 | orchestrator | Saturday 01 November 2025 14:31:21 +0000 (0:00:01.185) 0:01:58.631 ***** 2025-11-01 14:34:51.676786 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:51.676796 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:51.676805 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.676815 | orchestrator | 2025-11-01 14:34:51.676824 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-11-01 14:34:51.676834 | orchestrator | Saturday 01 November 2025 14:31:27 +0000 (0:00:05.686) 0:02:04.318 ***** 2025-11-01 14:34:51.676843 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:51.676853 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.676862 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:51.676871 | orchestrator | 2025-11-01 14:34:51.676881 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-11-01 14:34:51.676890 | orchestrator | Saturday 01 November 2025 14:31:33 +0000 (0:00:05.866) 0:02:10.185 ***** 2025-11-01 14:34:51.676900 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.676909 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:51.676919 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:51.676928 | orchestrator | 2025-11-01 14:34:51.676938 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-11-01 14:34:51.676954 | orchestrator | Saturday 01 November 2025 14:31:33 +0000 (0:00:00.869) 0:02:11.054 ***** 2025-11-01 14:34:51.676964 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:34:51.676973 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:34:51.676983 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:34:51.676992 | orchestrator | 2025-11-01 14:34:51.677002 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-11-01 14:34:51.677011 | orchestrator | Saturday 01 November 2025 14:31:36 +0000 (0:00:02.570) 0:02:13.625 ***** 2025-11-01 14:34:51.677021 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:51.677030 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.677040 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:51.677049 | orchestrator | 2025-11-01 14:34:51.677059 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-11-01 14:34:51.677068 | orchestrator | Saturday 01 November 2025 14:31:38 +0000 (0:00:01.800) 0:02:15.425 ***** 2025-11-01 14:34:51.677077 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.677087 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:51.677096 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:51.677106 | orchestrator | 2025-11-01 14:34:51.677115 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-11-01 14:34:51.677125 | orchestrator | Saturday 01 November 2025 14:31:39 +0000 (0:00:01.384) 0:02:16.809 ***** 2025-11-01 14:34:51.677134 | orchestrator | 
changed: [testbed-node-2] 2025-11-01 14:34:51.677144 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:51.677153 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.677162 | orchestrator | 2025-11-01 14:34:51.677178 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-11-01 14:34:51.677188 | orchestrator | Saturday 01 November 2025 14:31:41 +0000 (0:00:02.198) 0:02:19.008 ***** 2025-11-01 14:34:51.677197 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.677207 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:51.677216 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:51.677226 | orchestrator | 2025-11-01 14:34:51.677235 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-11-01 14:34:51.677245 | orchestrator | Saturday 01 November 2025 14:31:43 +0000 (0:00:01.869) 0:02:20.877 ***** 2025-11-01 14:34:51.677254 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:34:51.677264 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:34:51.677273 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:34:51.677283 | orchestrator | 2025-11-01 14:34:51.677293 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-11-01 14:34:51.677302 | orchestrator | Saturday 01 November 2025 14:31:44 +0000 (0:00:00.681) 0:02:21.558 ***** 2025-11-01 14:34:51.677312 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:34:51.677321 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:34:51.677331 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:34:51.677340 | orchestrator | 2025-11-01 14:34:51.677350 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-01 14:34:51.677364 | orchestrator | Saturday 01 November 2025 14:31:47 +0000 (0:00:03.095) 0:02:24.653 ***** 2025-11-01 14:34:51.677374 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:34:51.677384 | orchestrator | 2025-11-01 14:34:51.677393 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-11-01 14:34:51.677403 | orchestrator | Saturday 01 November 2025 14:31:48 +0000 (0:00:00.859) 0:02:25.513 ***** 2025-11-01 14:34:51.677412 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:34:51.677422 | orchestrator | 2025-11-01 14:34:51.677431 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-11-01 14:34:51.677466 | orchestrator | Saturday 01 November 2025 14:31:52 +0000 (0:00:03.707) 0:02:29.221 ***** 2025-11-01 14:34:51.677476 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:34:51.677486 | orchestrator | 2025-11-01 14:34:51.677496 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-11-01 14:34:51.677515 | orchestrator | Saturday 01 November 2025 14:31:55 +0000 (0:00:03.647) 0:02:32.869 ***** 2025-11-01 14:34:51.677525 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-11-01 14:34:51.677535 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-11-01 14:34:51.677544 | orchestrator | 2025-11-01 14:34:51.677554 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-11-01 14:34:51.677563 | orchestrator | Saturday 01 November 2025 14:32:03 +0000 (0:00:07.306) 0:02:40.175 ***** 2025-11-01 
14:34:51.677573 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:34:51.677582 | orchestrator | 2025-11-01 14:34:51.677592 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-11-01 14:34:51.677601 | orchestrator | Saturday 01 November 2025 14:32:06 +0000 (0:00:03.580) 0:02:43.756 ***** 2025-11-01 14:34:51.677611 | orchestrator | ok: [testbed-node-0] 2025-11-01 14:34:51.677620 | orchestrator | ok: [testbed-node-1] 2025-11-01 14:34:51.677630 | orchestrator | ok: [testbed-node-2] 2025-11-01 14:34:51.677639 | orchestrator | 2025-11-01 14:34:51.677649 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-11-01 14:34:51.677658 | orchestrator | Saturday 01 November 2025 14:32:07 +0000 (0:00:00.370) 0:02:44.126 ***** 2025-11-01 14:34:51.677671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.677691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.677708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.677725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.677736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.677746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.677757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.677769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.677785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.677801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.677818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.677828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.677839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.677849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.677859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.677870 | orchestrator | 2025-11-01 14:34:51.677879 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-11-01 14:34:51.677889 | orchestrator | Saturday 01 November 2025 14:32:09 +0000 (0:00:02.591) 0:02:46.717 ***** 2025-11-01 14:34:51.677899 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:51.677909 | orchestrator | 2025-11-01 14:34:51.677923 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-11-01 14:34:51.677933 | orchestrator | Saturday 01 November 2025 14:32:09 +0000 (0:00:00.144) 0:02:46.862 ***** 2025-11-01 14:34:51.677943 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:51.677952 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:51.677962 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:51.677977 | orchestrator | 2025-11-01 14:34:51.677987 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-11-01 14:34:51.677997 | orchestrator | Saturday 01 November 2025 14:32:10 +0000 (0:00:00.604) 0:02:47.467 ***** 2025-11-01 14:34:51.678012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 14:34:51.678067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 14:34:51.678078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.678089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.678099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:34:51.678109 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:51.678128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 14:34:51.678155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 14:34:51.678165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.678176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.678186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:34:51.678196 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:51.678206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 14:34:51.678224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 14:34:51.678245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.678256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.678266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:34:51.678276 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:51.678286 | orchestrator | 2025-11-01 14:34:51.678296 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-01 14:34:51.678305 | orchestrator | Saturday 01 November 2025 14:32:11 +0000 (0:00:00.905) 0:02:48.372 ***** 2025-11-01 14:34:51.678315 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 14:34:51.678325 | orchestrator | 2025-11-01 14:34:51.678335 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-11-01 14:34:51.678344 | orchestrator | Saturday 01 November 2025 14:32:11 +0000 (0:00:00.707) 0:02:49.079 ***** 2025-11-01 14:34:51.678354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.678927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.679063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.679082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.679095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.679107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.679119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.679141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.679170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.679187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.679200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.679211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.679223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.679234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.679260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.679273 | orchestrator | 2025-11-01 14:34:51.679286 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-11-01 14:34:51.679299 | orchestrator | Saturday 01 November 2025 14:32:17 +0000 (0:00:05.474) 0:02:54.554 ***** 2025-11-01 14:34:51.679315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 14:34:51.679327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 14:34:51.679339 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.679350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.679361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:34:51.679379 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:51.679398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 14:34:51.679410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 14:34:51.679427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 
'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.679462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.679476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:34:51.679488 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:51.679508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 14:34:51.679527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 14:34:51.679556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.679583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.679602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:34:51.679620 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:51.679647 | orchestrator | 2025-11-01 14:34:51.679671 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-11-01 14:34:51.679692 | orchestrator | Saturday 01 November 2025 14:32:18 +0000 (0:00:01.204) 0:02:55.758 ***** 2025-11-01 14:34:51.679713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 14:34:51.679744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 14:34:51.679757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.679779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.679799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:34:51.679812 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:51.679823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 14:34:51.679834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2025-11-01 14:34:51.679852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.679864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.679883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:34:51.679900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 14:34:51.679918 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:51.679937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 14:34:51.679956 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.679994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 14:34:51.680013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 14:34:51.680030 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:51.680042 | orchestrator | 2025-11-01 14:34:51.680053 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-11-01 14:34:51.680064 | orchestrator | Saturday 01 November 2025 14:32:20 +0000 (0:00:01.428) 0:02:57.186 ***** 2025-11-01 14:34:51.680083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.680101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.680113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.680132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.680143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.680155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.680172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 
14:34:51.680259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680298 | orchestrator | 2025-11-01 14:34:51.680316 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-11-01 14:34:51.680327 | orchestrator | Saturday 01 November 2025 14:32:26 +0000 (0:00:06.035) 0:03:03.221 ***** 2025-11-01 14:34:51.680338 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-11-01 14:34:51.680350 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-11-01 14:34:51.680361 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-11-01 14:34:51.680372 | orchestrator | 2025-11-01 14:34:51.680382 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-11-01 14:34:51.680393 | orchestrator | Saturday 01 November 2025 14:32:28 +0000 (0:00:02.672) 0:03:05.894 ***** 2025-11-01 14:34:51.680404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.680416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.680435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.680486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.680505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.680517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.680528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680602 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.680647 | orchestrator | 2025-11-01 14:34:51.680658 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-11-01 14:34:51.680669 | orchestrator | Saturday 01 November 2025 14:32:51 +0000 (0:00:22.884) 0:03:28.779 ***** 2025-11-01 14:34:51.680679 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.680690 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:51.680700 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:51.680711 | orchestrator | 2025-11-01 14:34:51.680722 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-11-01 14:34:51.680732 | orchestrator | Saturday 01 November 2025 14:32:53 +0000 (0:00:01.613) 0:03:30.392 ***** 2025-11-01 14:34:51.680743 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-11-01 14:34:51.680753 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-11-01 14:34:51.680769 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-11-01 14:34:51.680780 | 
orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-11-01 14:34:51.680791 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-11-01 14:34:51.680801 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-11-01 14:34:51.680818 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-11-01 14:34:51.680829 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-11-01 14:34:51.680839 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-11-01 14:34:51.680849 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-11-01 14:34:51.680860 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-11-01 14:34:51.680870 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-11-01 14:34:51.680881 | orchestrator | 2025-11-01 14:34:51.680891 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-11-01 14:34:51.680902 | orchestrator | Saturday 01 November 2025 14:32:59 +0000 (0:00:05.912) 0:03:36.304 ***** 2025-11-01 14:34:51.680917 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-11-01 14:34:51.680928 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-11-01 14:34:51.680938 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-11-01 14:34:51.680949 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-11-01 14:34:51.680959 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-11-01 14:34:51.680970 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-11-01 14:34:51.680980 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-11-01 14:34:51.680990 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-11-01 14:34:51.681001 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-11-01 14:34:51.681011 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-11-01 14:34:51.681022 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-11-01 14:34:51.681032 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-11-01 14:34:51.681043 | orchestrator | 2025-11-01 14:34:51.681053 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-11-01 14:34:51.681064 | orchestrator | Saturday 01 November 2025 14:33:05 +0000 (0:00:06.109) 0:03:42.414 ***** 2025-11-01 14:34:51.681074 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-11-01 14:34:51.681084 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-11-01 14:34:51.681095 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-11-01 14:34:51.681105 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-11-01 14:34:51.681116 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-11-01 14:34:51.681126 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-11-01 14:34:51.681137 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-11-01 14:34:51.681147 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-11-01 14:34:51.681158 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-11-01 
14:34:51.681168 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-11-01 14:34:51.681178 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-11-01 14:34:51.681189 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-11-01 14:34:51.681199 | orchestrator | 2025-11-01 14:34:51.681210 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-11-01 14:34:51.681220 | orchestrator | Saturday 01 November 2025 14:33:11 +0000 (0:00:06.285) 0:03:48.700 ***** 2025-11-01 14:34:51.681231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.681261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.681278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 14:34:51.681290 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.681301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.681313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 14:34:51.681324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.681350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.681362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.681378 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.681390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.681402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 14:34:51.681413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.681491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.681536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 14:34:51.681558 | orchestrator | 2025-11-01 14:34:51.681575 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-01 14:34:51.681586 | orchestrator | Saturday 01 November 2025 14:33:16 +0000 (0:00:05.303) 0:03:54.003 ***** 2025-11-01 14:34:51.681597 | orchestrator | skipping: [testbed-node-0] 2025-11-01 14:34:51.681608 | orchestrator | skipping: [testbed-node-1] 2025-11-01 14:34:51.681618 | orchestrator | skipping: [testbed-node-2] 2025-11-01 14:34:51.681629 | orchestrator | 2025-11-01 14:34:51.681639 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-11-01 14:34:51.681650 | orchestrator | Saturday 01 November 2025 14:33:17 +0000 (0:00:00.696) 0:03:54.700 ***** 2025-11-01 14:34:51.681660 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.681671 | orchestrator | 2025-11-01 14:34:51.681681 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-11-01 14:34:51.681692 | orchestrator | Saturday 01 November 2025 14:33:20 +0000 (0:00:02.556) 0:03:57.256 ***** 2025-11-01 14:34:51.681702 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.681719 | orchestrator | 2025-11-01 14:34:51.681731 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-11-01 14:34:51.681741 | orchestrator | Saturday 01 November 2025 14:33:22 +0000 (0:00:02.421) 0:03:59.678 ***** 2025-11-01 14:34:51.681752 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.681762 | orchestrator | 2025-11-01 14:34:51.681772 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-11-01 14:34:51.681783 | orchestrator | Saturday 01 November 2025 14:33:25 +0000 (0:00:02.800) 0:04:02.478 ***** 2025-11-01 14:34:51.681793 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.681804 | orchestrator | 2025-11-01 14:34:51.681814 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-11-01 14:34:51.681825 | orchestrator | Saturday 01 November 2025 14:33:28 +0000 (0:00:03.219) 0:04:05.698 ***** 2025-11-01 14:34:51.681835 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.681846 | orchestrator | 2025-11-01 14:34:51.681856 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-11-01 14:34:51.681867 | orchestrator | Saturday 01 November 2025 14:33:51 +0000 (0:00:22.819) 0:04:28.517 ***** 2025-11-01 14:34:51.681877 | orchestrator | 2025-11-01 14:34:51.681887 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-11-01 14:34:51.681898 | orchestrator | Saturday 01 November 2025 14:33:51 +0000 (0:00:00.067) 0:04:28.585 ***** 2025-11-01 14:34:51.681908 | orchestrator | 2025-11-01 14:34:51.681927 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-11-01 14:34:51.681938 | orchestrator | Saturday 01 November 2025 14:33:51 +0000 (0:00:00.073) 0:04:28.658 ***** 2025-11-01 14:34:51.681948 | orchestrator | 2025-11-01 14:34:51.681959 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-11-01 14:34:51.681969 | orchestrator | Saturday 01 November 2025 14:33:51 +0000 
(0:00:00.070) 0:04:28.729 ***** 2025-11-01 14:34:51.681980 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.681990 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:51.682001 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:51.682011 | orchestrator | 2025-11-01 14:34:51.682074 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-11-01 14:34:51.682086 | orchestrator | Saturday 01 November 2025 14:34:09 +0000 (0:00:17.796) 0:04:46.526 ***** 2025-11-01 14:34:51.682096 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:51.682107 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.682117 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:51.682128 | orchestrator | 2025-11-01 14:34:51.682139 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-11-01 14:34:51.682149 | orchestrator | Saturday 01 November 2025 14:34:21 +0000 (0:00:12.142) 0:04:58.668 ***** 2025-11-01 14:34:51.682160 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.682170 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:51.682181 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:51.682191 | orchestrator | 2025-11-01 14:34:51.682202 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-11-01 14:34:51.682212 | orchestrator | Saturday 01 November 2025 14:34:28 +0000 (0:00:06.572) 0:05:05.240 ***** 2025-11-01 14:34:51.682223 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.682233 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:51.682244 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:51.682254 | orchestrator | 2025-11-01 14:34:51.682264 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-11-01 14:34:51.682275 | orchestrator | Saturday 01 November 2025 14:34:38 +0000 (0:00:10.591) 0:05:15.832 ***** 2025-11-01 14:34:51.682285 | orchestrator | changed: [testbed-node-1] 2025-11-01 14:34:51.682296 | orchestrator | changed: [testbed-node-0] 2025-11-01 14:34:51.682306 | orchestrator | changed: [testbed-node-2] 2025-11-01 14:34:51.682317 | orchestrator | 2025-11-01 14:34:51.682327 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 14:34:51.682338 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-01 14:34:51.682349 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-01 14:34:51.682360 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-01 14:34:51.682371 | orchestrator | 2025-11-01 14:34:51.682381 | orchestrator | 2025-11-01 14:34:51.682392 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 14:34:51.682403 | orchestrator | Saturday 01 November 2025 14:34:49 +0000 (0:00:11.049) 0:05:26.881 ***** 2025-11-01 14:34:51.682420 | orchestrator | =============================================================================== 2025-11-01 14:34:51.682431 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 22.88s 2025-11-01 14:34:51.682463 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.82s 2025-11-01 14:34:51.682474 | orchestrator | octavia : 
Add rules for security groups -------------------------------- 18.66s 2025-11-01 14:34:51.682485 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.80s 2025-11-01 14:34:51.682496 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.28s 2025-11-01 14:34:51.682514 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 12.14s 2025-11-01 14:34:51.682524 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.72s 2025-11-01 14:34:51.682535 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.05s 2025-11-01 14:34:51.682546 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.59s 2025-11-01 14:34:51.682556 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.75s 2025-11-01 14:34:51.682572 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.92s 2025-11-01 14:34:51.682583 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.33s 2025-11-01 14:34:51.682594 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.31s 2025-11-01 14:34:51.682605 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 6.57s 2025-11-01 14:34:51.682615 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.48s 2025-11-01 14:34:51.682626 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 6.29s 2025-11-01 14:34:51.682636 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 6.23s 2025-11-01 14:34:51.682647 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.11s 2025-11-01 14:34:51.682658 | orchestrator | octavia : Copying over config.json files for services ------------------- 6.04s 2025-11-01 14:34:51.682668 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.91s 2025-11-01 14:34:51.682679 | orchestrator | 2025-11-01 14:34:51 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:51.682690 | orchestrator | 2025-11-01 14:34:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:54.725997 | orchestrator | 2025-11-01 14:34:54 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:54.726151 | orchestrator | 2025-11-01 14:34:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:34:57.769814 | orchestrator | 2025-11-01 14:34:57 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:34:57.769916 | orchestrator | 2025-11-01 14:34:57 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:35:00.813044 | orchestrator | 2025-11-01 14:35:00 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:35:00.813139 | orchestrator | 2025-11-01 14:35:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:35:03.860781 | orchestrator | 2025-11-01 14:35:03 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:35:03.860869 | orchestrator | 2025-11-01 14:35:03 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:35:06.901325 | orchestrator | 2025-11-01 14:35:06 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 
2025-11-01 14:35:06.901412 | orchestrator | 2025-11-01 14:35:06 | INFO  | Wait 1 second(s) until the next check
[... the task watcher keeps repeating the same pair of messages, "Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED" followed by "Wait 1 second(s) until the next check", roughly every 3 seconds from 14:35:09 through 14:45:16, where this portion of the console output ends with the task still in state STARTED ...]
INFO  | Wait 1 second(s) until the next check 2025-11-01 14:45:19.476539 | orchestrator | 2025-11-01 14:45:19 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:45:19.476636 | orchestrator | 2025-11-01 14:45:19 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:45:22.525958 | orchestrator | 2025-11-01 14:45:22 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:45:22.526104 | orchestrator | 2025-11-01 14:45:22 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:45:25.566584 | orchestrator | 2025-11-01 14:45:25 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:45:25.566679 | orchestrator | 2025-11-01 14:45:25 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:45:28.611826 | orchestrator | 2025-11-01 14:45:28 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:45:28.611958 | orchestrator | 2025-11-01 14:45:28 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:45:31.651024 | orchestrator | 2025-11-01 14:45:31 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:45:31.651119 | orchestrator | 2025-11-01 14:45:31 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:45:34.695306 | orchestrator | 2025-11-01 14:45:34 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:45:34.695449 | orchestrator | 2025-11-01 14:45:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:45:37.740309 | orchestrator | 2025-11-01 14:45:37 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:45:37.740452 | orchestrator | 2025-11-01 14:45:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:45:40.786625 | orchestrator | 2025-11-01 14:45:40 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:45:40.786724 | orchestrator | 2025-11-01 14:45:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:45:43.837094 | orchestrator | 2025-11-01 14:45:43 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:45:43.837192 | orchestrator | 2025-11-01 14:45:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:45:46.881627 | orchestrator | 2025-11-01 14:45:46 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:45:46.881727 | orchestrator | 2025-11-01 14:45:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:45:49.933074 | orchestrator | 2025-11-01 14:45:49 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:45:49.933170 | orchestrator | 2025-11-01 14:45:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:45:52.982072 | orchestrator | 2025-11-01 14:45:52 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:45:52.982316 | orchestrator | 2025-11-01 14:45:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:45:56.025790 | orchestrator | 2025-11-01 14:45:56 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:45:56.025923 | orchestrator | 2025-11-01 14:45:56 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:45:59.079748 | orchestrator | 2025-11-01 14:45:59 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:45:59.079833 | orchestrator | 2025-11-01 14:45:59 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:02.118588 | 
orchestrator | 2025-11-01 14:46:02 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:02.118684 | orchestrator | 2025-11-01 14:46:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:05.158609 | orchestrator | 2025-11-01 14:46:05 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:05.158708 | orchestrator | 2025-11-01 14:46:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:08.200230 | orchestrator | 2025-11-01 14:46:08 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:08.200338 | orchestrator | 2025-11-01 14:46:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:11.251824 | orchestrator | 2025-11-01 14:46:11 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:11.251945 | orchestrator | 2025-11-01 14:46:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:14.296543 | orchestrator | 2025-11-01 14:46:14 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:14.296649 | orchestrator | 2025-11-01 14:46:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:17.352609 | orchestrator | 2025-11-01 14:46:17 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:17.352704 | orchestrator | 2025-11-01 14:46:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:20.404749 | orchestrator | 2025-11-01 14:46:20 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:20.404844 | orchestrator | 2025-11-01 14:46:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:23.450678 | orchestrator | 2025-11-01 14:46:23 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:23.450777 | orchestrator | 2025-11-01 14:46:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:26.490793 | orchestrator | 2025-11-01 14:46:26 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:26.490891 | orchestrator | 2025-11-01 14:46:26 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:29.540970 | orchestrator | 2025-11-01 14:46:29 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:29.541448 | orchestrator | 2025-11-01 14:46:29 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:32.590267 | orchestrator | 2025-11-01 14:46:32 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:32.590494 | orchestrator | 2025-11-01 14:46:32 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:35.640298 | orchestrator | 2025-11-01 14:46:35 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:35.640443 | orchestrator | 2025-11-01 14:46:35 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:38.690293 | orchestrator | 2025-11-01 14:46:38 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:38.690443 | orchestrator | 2025-11-01 14:46:38 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:41.733209 | orchestrator | 2025-11-01 14:46:41 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:41.733323 | orchestrator | 2025-11-01 14:46:41 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:44.779026 | orchestrator | 2025-11-01 14:46:44 | INFO  | Task 
090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:44.779134 | orchestrator | 2025-11-01 14:46:44 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:47.831206 | orchestrator | 2025-11-01 14:46:47 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:47.831288 | orchestrator | 2025-11-01 14:46:47 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:50.876268 | orchestrator | 2025-11-01 14:46:50 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:50.876421 | orchestrator | 2025-11-01 14:46:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:53.925018 | orchestrator | 2025-11-01 14:46:53 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:53.925111 | orchestrator | 2025-11-01 14:46:53 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:46:56.968892 | orchestrator | 2025-11-01 14:46:56 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:46:56.968986 | orchestrator | 2025-11-01 14:46:56 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:00.022512 | orchestrator | 2025-11-01 14:47:00 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:00.022615 | orchestrator | 2025-11-01 14:47:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:03.074086 | orchestrator | 2025-11-01 14:47:03 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:03.074186 | orchestrator | 2025-11-01 14:47:03 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:06.132633 | orchestrator | 2025-11-01 14:47:06 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:06.132732 | orchestrator | 2025-11-01 14:47:06 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:09.173458 | orchestrator | 2025-11-01 14:47:09 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:09.173590 | orchestrator | 2025-11-01 14:47:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:12.223643 | orchestrator | 2025-11-01 14:47:12 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:12.223743 | orchestrator | 2025-11-01 14:47:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:15.265486 | orchestrator | 2025-11-01 14:47:15 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:15.265590 | orchestrator | 2025-11-01 14:47:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:18.308870 | orchestrator | 2025-11-01 14:47:18 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:18.308972 | orchestrator | 2025-11-01 14:47:18 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:21.345473 | orchestrator | 2025-11-01 14:47:21 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:21.345574 | orchestrator | 2025-11-01 14:47:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:24.390815 | orchestrator | 2025-11-01 14:47:24 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:24.390915 | orchestrator | 2025-11-01 14:47:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:27.451610 | orchestrator | 2025-11-01 14:47:27 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 
14:47:27.451736 | orchestrator | 2025-11-01 14:47:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:30.506300 | orchestrator | 2025-11-01 14:47:30 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:30.506446 | orchestrator | 2025-11-01 14:47:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:33.553014 | orchestrator | 2025-11-01 14:47:33 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:33.553114 | orchestrator | 2025-11-01 14:47:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:36.592202 | orchestrator | 2025-11-01 14:47:36 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:36.592283 | orchestrator | 2025-11-01 14:47:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:39.639023 | orchestrator | 2025-11-01 14:47:39 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:39.639130 | orchestrator | 2025-11-01 14:47:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:42.691594 | orchestrator | 2025-11-01 14:47:42 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:42.691695 | orchestrator | 2025-11-01 14:47:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:45.738263 | orchestrator | 2025-11-01 14:47:45 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:45.738397 | orchestrator | 2025-11-01 14:47:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:48.789498 | orchestrator | 2025-11-01 14:47:48 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:48.789607 | orchestrator | 2025-11-01 14:47:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:51.836906 | orchestrator | 2025-11-01 14:47:51 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:51.837018 | orchestrator | 2025-11-01 14:47:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:54.875626 | orchestrator | 2025-11-01 14:47:54 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:54.875719 | orchestrator | 2025-11-01 14:47:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:47:57.918290 | orchestrator | 2025-11-01 14:47:57 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:47:57.918441 | orchestrator | 2025-11-01 14:47:57 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:00.957174 | orchestrator | 2025-11-01 14:48:00 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:00.957274 | orchestrator | 2025-11-01 14:48:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:04.007805 | orchestrator | 2025-11-01 14:48:04 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:04.007907 | orchestrator | 2025-11-01 14:48:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:07.053662 | orchestrator | 2025-11-01 14:48:07 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:07.053774 | orchestrator | 2025-11-01 14:48:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:10.096349 | orchestrator | 2025-11-01 14:48:10 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:10.096482 | orchestrator | 2025-11-01 14:48:10 | INFO  | Wait 1 second(s) 
until the next check 2025-11-01 14:48:13.139413 | orchestrator | 2025-11-01 14:48:13 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:13.139534 | orchestrator | 2025-11-01 14:48:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:16.180619 | orchestrator | 2025-11-01 14:48:16 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:16.180724 | orchestrator | 2025-11-01 14:48:16 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:19.219730 | orchestrator | 2025-11-01 14:48:19 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:19.219828 | orchestrator | 2025-11-01 14:48:19 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:22.269908 | orchestrator | 2025-11-01 14:48:22 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:22.269998 | orchestrator | 2025-11-01 14:48:22 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:25.315609 | orchestrator | 2025-11-01 14:48:25 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:25.315712 | orchestrator | 2025-11-01 14:48:25 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:28.360710 | orchestrator | 2025-11-01 14:48:28 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:28.360818 | orchestrator | 2025-11-01 14:48:28 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:31.405005 | orchestrator | 2025-11-01 14:48:31 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:31.405112 | orchestrator | 2025-11-01 14:48:31 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:34.457291 | orchestrator | 2025-11-01 14:48:34 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:34.457447 | orchestrator | 2025-11-01 14:48:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:37.503493 | orchestrator | 2025-11-01 14:48:37 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:37.503585 | orchestrator | 2025-11-01 14:48:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:40.543725 | orchestrator | 2025-11-01 14:48:40 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:40.543816 | orchestrator | 2025-11-01 14:48:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:43.595239 | orchestrator | 2025-11-01 14:48:43 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:43.595342 | orchestrator | 2025-11-01 14:48:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:46.636889 | orchestrator | 2025-11-01 14:48:46 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:46.636989 | orchestrator | 2025-11-01 14:48:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:49.687253 | orchestrator | 2025-11-01 14:48:49 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:49.687332 | orchestrator | 2025-11-01 14:48:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:52.741125 | orchestrator | 2025-11-01 14:48:52 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:52.741235 | orchestrator | 2025-11-01 14:48:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:55.789908 | orchestrator | 2025-11-01 
14:48:55 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:55.790004 | orchestrator | 2025-11-01 14:48:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:48:58.836570 | orchestrator | 2025-11-01 14:48:58 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:48:58.836698 | orchestrator | 2025-11-01 14:48:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:01.881939 | orchestrator | 2025-11-01 14:49:01 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:01.882093 | orchestrator | 2025-11-01 14:49:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:04.927943 | orchestrator | 2025-11-01 14:49:04 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:04.928062 | orchestrator | 2025-11-01 14:49:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:07.973495 | orchestrator | 2025-11-01 14:49:07 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:07.973557 | orchestrator | 2025-11-01 14:49:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:11.022490 | orchestrator | 2025-11-01 14:49:11 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:11.022591 | orchestrator | 2025-11-01 14:49:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:14.064130 | orchestrator | 2025-11-01 14:49:14 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:14.064228 | orchestrator | 2025-11-01 14:49:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:17.114323 | orchestrator | 2025-11-01 14:49:17 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:17.114462 | orchestrator | 2025-11-01 14:49:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:20.174575 | orchestrator | 2025-11-01 14:49:20 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:20.174668 | orchestrator | 2025-11-01 14:49:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:23.216286 | orchestrator | 2025-11-01 14:49:23 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:23.216440 | orchestrator | 2025-11-01 14:49:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:26.289803 | orchestrator | 2025-11-01 14:49:26 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:26.289900 | orchestrator | 2025-11-01 14:49:26 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:29.337709 | orchestrator | 2025-11-01 14:49:29 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:29.337880 | orchestrator | 2025-11-01 14:49:29 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:32.382990 | orchestrator | 2025-11-01 14:49:32 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:32.383097 | orchestrator | 2025-11-01 14:49:32 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:35.425421 | orchestrator | 2025-11-01 14:49:35 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:35.425580 | orchestrator | 2025-11-01 14:49:35 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:38.472413 | orchestrator | 2025-11-01 14:49:38 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 
2025-11-01 14:49:38.472556 | orchestrator | 2025-11-01 14:49:38 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:41.510558 | orchestrator | 2025-11-01 14:49:41 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:41.510660 | orchestrator | 2025-11-01 14:49:41 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:44.560061 | orchestrator | 2025-11-01 14:49:44 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:44.560157 | orchestrator | 2025-11-01 14:49:44 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:47.595612 | orchestrator | 2025-11-01 14:49:47 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:47.595689 | orchestrator | 2025-11-01 14:49:47 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:50.641914 | orchestrator | 2025-11-01 14:49:50 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:50.642003 | orchestrator | 2025-11-01 14:49:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:53.695072 | orchestrator | 2025-11-01 14:49:53 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:53.695173 | orchestrator | 2025-11-01 14:49:53 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:56.745780 | orchestrator | 2025-11-01 14:49:56 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:56.745897 | orchestrator | 2025-11-01 14:49:56 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:49:59.799836 | orchestrator | 2025-11-01 14:49:59 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:49:59.799931 | orchestrator | 2025-11-01 14:49:59 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:02.847241 | orchestrator | 2025-11-01 14:50:02 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:02.847431 | orchestrator | 2025-11-01 14:50:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:05.897135 | orchestrator | 2025-11-01 14:50:05 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:05.897221 | orchestrator | 2025-11-01 14:50:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:08.941351 | orchestrator | 2025-11-01 14:50:08 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:08.941433 | orchestrator | 2025-11-01 14:50:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:11.991069 | orchestrator | 2025-11-01 14:50:11 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:11.991154 | orchestrator | 2025-11-01 14:50:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:15.051886 | orchestrator | 2025-11-01 14:50:15 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:15.051984 | orchestrator | 2025-11-01 14:50:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:18.093219 | orchestrator | 2025-11-01 14:50:18 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:18.093310 | orchestrator | 2025-11-01 14:50:18 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:21.133367 | orchestrator | 2025-11-01 14:50:21 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:21.133575 | orchestrator | 2025-11-01 14:50:21 | INFO  | Wait 1 
second(s) until the next check 2025-11-01 14:50:24.174613 | orchestrator | 2025-11-01 14:50:24 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:24.174709 | orchestrator | 2025-11-01 14:50:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:27.220645 | orchestrator | 2025-11-01 14:50:27 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:27.220737 | orchestrator | 2025-11-01 14:50:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:30.263963 | orchestrator | 2025-11-01 14:50:30 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:30.264062 | orchestrator | 2025-11-01 14:50:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:33.311811 | orchestrator | 2025-11-01 14:50:33 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:33.311911 | orchestrator | 2025-11-01 14:50:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:36.356831 | orchestrator | 2025-11-01 14:50:36 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:36.356933 | orchestrator | 2025-11-01 14:50:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:39.406250 | orchestrator | 2025-11-01 14:50:39 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:39.406403 | orchestrator | 2025-11-01 14:50:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:42.452880 | orchestrator | 2025-11-01 14:50:42 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:42.453046 | orchestrator | 2025-11-01 14:50:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:45.498862 | orchestrator | 2025-11-01 14:50:45 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:45.498965 | orchestrator | 2025-11-01 14:50:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:48.543639 | orchestrator | 2025-11-01 14:50:48 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:48.543733 | orchestrator | 2025-11-01 14:50:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:51.591832 | orchestrator | 2025-11-01 14:50:51 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:51.591931 | orchestrator | 2025-11-01 14:50:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:54.634472 | orchestrator | 2025-11-01 14:50:54 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:54.634566 | orchestrator | 2025-11-01 14:50:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:50:57.683737 | orchestrator | 2025-11-01 14:50:57 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:50:57.683842 | orchestrator | 2025-11-01 14:50:57 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:00.733868 | orchestrator | 2025-11-01 14:51:00 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:00.733986 | orchestrator | 2025-11-01 14:51:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:03.770561 | orchestrator | 2025-11-01 14:51:03 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:03.770659 | orchestrator | 2025-11-01 14:51:03 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:06.817274 | orchestrator | 
2025-11-01 14:51:06 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:06.817402 | orchestrator | 2025-11-01 14:51:06 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:09.869824 | orchestrator | 2025-11-01 14:51:09 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:09.869921 | orchestrator | 2025-11-01 14:51:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:12.920090 | orchestrator | 2025-11-01 14:51:12 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:12.920199 | orchestrator | 2025-11-01 14:51:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:15.966826 | orchestrator | 2025-11-01 14:51:15 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:15.966919 | orchestrator | 2025-11-01 14:51:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:19.023875 | orchestrator | 2025-11-01 14:51:19 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:19.023966 | orchestrator | 2025-11-01 14:51:19 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:22.071853 | orchestrator | 2025-11-01 14:51:22 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:22.071940 | orchestrator | 2025-11-01 14:51:22 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:25.114401 | orchestrator | 2025-11-01 14:51:25 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:25.114485 | orchestrator | 2025-11-01 14:51:25 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:28.167638 | orchestrator | 2025-11-01 14:51:28 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:28.167731 | orchestrator | 2025-11-01 14:51:28 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:31.210630 | orchestrator | 2025-11-01 14:51:31 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:31.210724 | orchestrator | 2025-11-01 14:51:31 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:34.266611 | orchestrator | 2025-11-01 14:51:34 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:34.266711 | orchestrator | 2025-11-01 14:51:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:37.307217 | orchestrator | 2025-11-01 14:51:37 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:37.307366 | orchestrator | 2025-11-01 14:51:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:40.353522 | orchestrator | 2025-11-01 14:51:40 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:40.353628 | orchestrator | 2025-11-01 14:51:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:43.405585 | orchestrator | 2025-11-01 14:51:43 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:43.405664 | orchestrator | 2025-11-01 14:51:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:46.444076 | orchestrator | 2025-11-01 14:51:46 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:46.444186 | orchestrator | 2025-11-01 14:51:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:49.502204 | orchestrator | 2025-11-01 14:51:49 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in 
state STARTED 2025-11-01 14:51:49.502993 | orchestrator | 2025-11-01 14:51:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:52.545894 | orchestrator | 2025-11-01 14:51:52 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:52.545980 | orchestrator | 2025-11-01 14:51:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:55.588915 | orchestrator | 2025-11-01 14:51:55 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:55.588987 | orchestrator | 2025-11-01 14:51:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:51:58.634093 | orchestrator | 2025-11-01 14:51:58 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:51:58.634166 | orchestrator | 2025-11-01 14:51:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:01.687180 | orchestrator | 2025-11-01 14:52:01 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:01.687262 | orchestrator | 2025-11-01 14:52:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:04.740435 | orchestrator | 2025-11-01 14:52:04 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:04.740513 | orchestrator | 2025-11-01 14:52:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:07.799878 | orchestrator | 2025-11-01 14:52:07 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:07.799929 | orchestrator | 2025-11-01 14:52:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:10.841686 | orchestrator | 2025-11-01 14:52:10 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:10.841790 | orchestrator | 2025-11-01 14:52:10 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:13.897032 | orchestrator | 2025-11-01 14:52:13 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:13.897119 | orchestrator | 2025-11-01 14:52:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:16.943416 | orchestrator | 2025-11-01 14:52:16 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:16.943499 | orchestrator | 2025-11-01 14:52:16 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:19.991552 | orchestrator | 2025-11-01 14:52:19 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:19.991631 | orchestrator | 2025-11-01 14:52:19 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:23.036966 | orchestrator | 2025-11-01 14:52:23 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:23.037014 | orchestrator | 2025-11-01 14:52:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:26.077265 | orchestrator | 2025-11-01 14:52:26 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:26.077375 | orchestrator | 2025-11-01 14:52:26 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:29.118514 | orchestrator | 2025-11-01 14:52:29 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:29.118588 | orchestrator | 2025-11-01 14:52:29 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:32.169324 | orchestrator | 2025-11-01 14:52:32 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:32.169398 | orchestrator | 2025-11-01 14:52:32 | 
INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:35.218941 | orchestrator | 2025-11-01 14:52:35 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:35.219031 | orchestrator | 2025-11-01 14:52:35 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:38.269197 | orchestrator | 2025-11-01 14:52:38 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:38.269317 | orchestrator | 2025-11-01 14:52:38 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:41.311327 | orchestrator | 2025-11-01 14:52:41 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:41.311433 | orchestrator | 2025-11-01 14:52:41 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:44.353878 | orchestrator | 2025-11-01 14:52:44 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:44.353962 | orchestrator | 2025-11-01 14:52:44 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:47.401546 | orchestrator | 2025-11-01 14:52:47 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:47.401645 | orchestrator | 2025-11-01 14:52:47 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:50.441119 | orchestrator | 2025-11-01 14:52:50 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:50.441335 | orchestrator | 2025-11-01 14:52:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:53.500755 | orchestrator | 2025-11-01 14:52:53 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:53.500854 | orchestrator | 2025-11-01 14:52:53 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:56.545749 | orchestrator | 2025-11-01 14:52:56 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:56.545869 | orchestrator | 2025-11-01 14:52:56 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:52:59.598855 | orchestrator | 2025-11-01 14:52:59 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:52:59.598974 | orchestrator | 2025-11-01 14:52:59 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:02.648822 | orchestrator | 2025-11-01 14:53:02 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:02.648916 | orchestrator | 2025-11-01 14:53:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:05.699423 | orchestrator | 2025-11-01 14:53:05 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:05.699522 | orchestrator | 2025-11-01 14:53:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:08.752948 | orchestrator | 2025-11-01 14:53:08 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:08.753051 | orchestrator | 2025-11-01 14:53:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:11.799585 | orchestrator | 2025-11-01 14:53:11 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:11.799687 | orchestrator | 2025-11-01 14:53:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:14.853332 | orchestrator | 2025-11-01 14:53:14 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:14.853426 | orchestrator | 2025-11-01 14:53:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:17.898907 | 
orchestrator | 2025-11-01 14:53:17 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:17.899010 | orchestrator | 2025-11-01 14:53:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:20.941735 | orchestrator | 2025-11-01 14:53:20 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:20.941833 | orchestrator | 2025-11-01 14:53:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:24.003026 | orchestrator | 2025-11-01 14:53:24 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:24.003127 | orchestrator | 2025-11-01 14:53:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:27.056040 | orchestrator | 2025-11-01 14:53:27 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:27.056146 | orchestrator | 2025-11-01 14:53:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:30.104720 | orchestrator | 2025-11-01 14:53:30 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:30.104829 | orchestrator | 2025-11-01 14:53:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:33.156543 | orchestrator | 2025-11-01 14:53:33 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:33.156644 | orchestrator | 2025-11-01 14:53:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:36.203671 | orchestrator | 2025-11-01 14:53:36 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:36.203755 | orchestrator | 2025-11-01 14:53:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:39.252121 | orchestrator | 2025-11-01 14:53:39 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:39.252221 | orchestrator | 2025-11-01 14:53:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:42.295245 | orchestrator | 2025-11-01 14:53:42 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:42.295381 | orchestrator | 2025-11-01 14:53:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:45.345757 | orchestrator | 2025-11-01 14:53:45 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:45.345851 | orchestrator | 2025-11-01 14:53:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:48.397695 | orchestrator | 2025-11-01 14:53:48 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:48.397800 | orchestrator | 2025-11-01 14:53:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:51.444000 | orchestrator | 2025-11-01 14:53:51 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:51.444091 | orchestrator | 2025-11-01 14:53:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:54.492722 | orchestrator | 2025-11-01 14:53:54 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:54.492839 | orchestrator | 2025-11-01 14:53:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:53:57.539621 | orchestrator | 2025-11-01 14:53:57 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:53:57.539712 | orchestrator | 2025-11-01 14:53:57 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:00.584927 | orchestrator | 2025-11-01 14:54:00 | INFO  | Task 
090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:54:00.585175 | orchestrator | 2025-11-01 14:54:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:03.635796 | orchestrator | 2025-11-01 14:54:03 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:54:03.635901 | orchestrator | 2025-11-01 14:54:03 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:06.682709 | orchestrator | 2025-11-01 14:54:06 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:54:06.682804 | orchestrator | 2025-11-01 14:54:06 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:09.730970 | orchestrator | 2025-11-01 14:54:09 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:54:09.731073 | orchestrator | 2025-11-01 14:54:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:12.780134 | orchestrator | 2025-11-01 14:54:12 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:54:12.780225 | orchestrator | 2025-11-01 14:54:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:15.829790 | orchestrator | 2025-11-01 14:54:15 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:54:15.829871 | orchestrator | 2025-11-01 14:54:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:18.878638 | orchestrator | 2025-11-01 14:54:18 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:54:18.878739 | orchestrator | 2025-11-01 14:54:18 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:21.926431 | orchestrator | 2025-11-01 14:54:21 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:54:21.926523 | orchestrator | 2025-11-01 14:54:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:24.974473 | orchestrator | 2025-11-01 14:54:24 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:54:24.974568 | orchestrator | 2025-11-01 14:54:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:28.026350 | orchestrator | 2025-11-01 14:54:28 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:54:28.026446 | orchestrator | 2025-11-01 14:54:28 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:31.071229 | orchestrator | 2025-11-01 14:54:31 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:54:31.071379 | orchestrator | 2025-11-01 14:54:31 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:34.120001 | orchestrator | 2025-11-01 14:54:34 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:54:34.120094 | orchestrator | 2025-11-01 14:54:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:37.162816 | orchestrator | 2025-11-01 14:54:37 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:54:37.162918 | orchestrator | 2025-11-01 14:54:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:40.215601 | orchestrator | 2025-11-01 14:54:40 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 14:54:40.215705 | orchestrator | 2025-11-01 14:54:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 14:54:43.272543 | orchestrator | 2025-11-01 14:54:43 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 
14:54:43.272626 | orchestrator | 2025-11-01 14:54:43 | INFO  | Wait 1 second(s) until the next check
2025-11-01 14:54:46.315937 | orchestrator | 2025-11-01 14:54:46 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED
2025-11-01 14:54:46.316040 | orchestrator | 2025-11-01 14:54:46 | INFO  | Wait 1 second(s) until the next check
[... the same pair of messages, "Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED" followed by "Wait 1 second(s) until the next check", repeats roughly every 3 seconds from 2025-11-01 14:54:49 through 2025-11-01 15:09:54 while the task remains in state STARTED ...]
2025-11-01 15:09:57.904978 | orchestrator | 2025-11-01 15:09:57 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED
2025-11-01 15:09:57.905211 | orchestrator | 2025-11-01 15:09:57 | INFO  | Wait 1 second(s)
until the next check 2025-11-01 15:10:00.956660 | orchestrator | 2025-11-01 15:10:00 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:00.956757 | orchestrator | 2025-11-01 15:10:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:04.005539 | orchestrator | 2025-11-01 15:10:04 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:04.005637 | orchestrator | 2025-11-01 15:10:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:07.047875 | orchestrator | 2025-11-01 15:10:07 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:07.047958 | orchestrator | 2025-11-01 15:10:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:10.098827 | orchestrator | 2025-11-01 15:10:10 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:10.098910 | orchestrator | 2025-11-01 15:10:10 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:13.154970 | orchestrator | 2025-11-01 15:10:13 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:13.155071 | orchestrator | 2025-11-01 15:10:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:16.202920 | orchestrator | 2025-11-01 15:10:16 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:16.203149 | orchestrator | 2025-11-01 15:10:16 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:19.252494 | orchestrator | 2025-11-01 15:10:19 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:19.252594 | orchestrator | 2025-11-01 15:10:19 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:22.296353 | orchestrator | 2025-11-01 15:10:22 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:22.296459 | orchestrator | 2025-11-01 15:10:22 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:25.341466 | orchestrator | 2025-11-01 15:10:25 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:25.341582 | orchestrator | 2025-11-01 15:10:25 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:28.388511 | orchestrator | 2025-11-01 15:10:28 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:28.388602 | orchestrator | 2025-11-01 15:10:28 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:31.435494 | orchestrator | 2025-11-01 15:10:31 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:31.435622 | orchestrator | 2025-11-01 15:10:31 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:34.482570 | orchestrator | 2025-11-01 15:10:34 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:34.482670 | orchestrator | 2025-11-01 15:10:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:37.525701 | orchestrator | 2025-11-01 15:10:37 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:37.525804 | orchestrator | 2025-11-01 15:10:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:40.568455 | orchestrator | 2025-11-01 15:10:40 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:40.568545 | orchestrator | 2025-11-01 15:10:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:43.618720 | orchestrator | 2025-11-01 
15:10:43 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:43.618812 | orchestrator | 2025-11-01 15:10:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:46.665674 | orchestrator | 2025-11-01 15:10:46 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:46.665772 | orchestrator | 2025-11-01 15:10:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:49.710714 | orchestrator | 2025-11-01 15:10:49 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:49.710811 | orchestrator | 2025-11-01 15:10:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:52.760813 | orchestrator | 2025-11-01 15:10:52 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:52.760909 | orchestrator | 2025-11-01 15:10:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:55.813149 | orchestrator | 2025-11-01 15:10:55 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:55.813241 | orchestrator | 2025-11-01 15:10:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:10:58.862702 | orchestrator | 2025-11-01 15:10:58 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:10:58.862823 | orchestrator | 2025-11-01 15:10:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:01.907984 | orchestrator | 2025-11-01 15:11:01 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:01.908205 | orchestrator | 2025-11-01 15:11:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:04.960587 | orchestrator | 2025-11-01 15:11:04 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:04.960684 | orchestrator | 2025-11-01 15:11:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:08.017644 | orchestrator | 2025-11-01 15:11:08 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:08.017738 | orchestrator | 2025-11-01 15:11:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:11.061099 | orchestrator | 2025-11-01 15:11:11 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:11.061189 | orchestrator | 2025-11-01 15:11:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:14.113779 | orchestrator | 2025-11-01 15:11:14 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:14.113915 | orchestrator | 2025-11-01 15:11:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:17.155408 | orchestrator | 2025-11-01 15:11:17 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:17.155504 | orchestrator | 2025-11-01 15:11:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:20.199319 | orchestrator | 2025-11-01 15:11:20 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:20.199418 | orchestrator | 2025-11-01 15:11:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:23.247788 | orchestrator | 2025-11-01 15:11:23 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:23.247890 | orchestrator | 2025-11-01 15:11:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:26.297682 | orchestrator | 2025-11-01 15:11:26 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 
2025-11-01 15:11:26.297791 | orchestrator | 2025-11-01 15:11:26 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:29.340169 | orchestrator | 2025-11-01 15:11:29 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:29.340308 | orchestrator | 2025-11-01 15:11:29 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:32.393661 | orchestrator | 2025-11-01 15:11:32 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:32.393764 | orchestrator | 2025-11-01 15:11:32 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:35.442243 | orchestrator | 2025-11-01 15:11:35 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:35.442314 | orchestrator | 2025-11-01 15:11:35 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:38.490070 | orchestrator | 2025-11-01 15:11:38 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:38.490169 | orchestrator | 2025-11-01 15:11:38 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:41.538689 | orchestrator | 2025-11-01 15:11:41 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:41.538769 | orchestrator | 2025-11-01 15:11:41 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:44.579667 | orchestrator | 2025-11-01 15:11:44 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:44.579797 | orchestrator | 2025-11-01 15:11:44 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:47.631366 | orchestrator | 2025-11-01 15:11:47 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:47.631439 | orchestrator | 2025-11-01 15:11:47 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:50.681548 | orchestrator | 2025-11-01 15:11:50 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:50.681652 | orchestrator | 2025-11-01 15:11:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:53.725969 | orchestrator | 2025-11-01 15:11:53 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:53.726127 | orchestrator | 2025-11-01 15:11:53 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:56.772639 | orchestrator | 2025-11-01 15:11:56 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:56.772744 | orchestrator | 2025-11-01 15:11:56 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:11:59.819327 | orchestrator | 2025-11-01 15:11:59 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:11:59.819420 | orchestrator | 2025-11-01 15:11:59 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:02.869796 | orchestrator | 2025-11-01 15:12:02 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:02.869897 | orchestrator | 2025-11-01 15:12:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:05.914564 | orchestrator | 2025-11-01 15:12:05 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:05.914668 | orchestrator | 2025-11-01 15:12:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:08.963552 | orchestrator | 2025-11-01 15:12:08 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:08.963648 | orchestrator | 2025-11-01 15:12:08 | INFO  | Wait 1 
second(s) until the next check 2025-11-01 15:12:12.005063 | orchestrator | 2025-11-01 15:12:12 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:12.005165 | orchestrator | 2025-11-01 15:12:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:15.059058 | orchestrator | 2025-11-01 15:12:15 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:15.059168 | orchestrator | 2025-11-01 15:12:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:18.107053 | orchestrator | 2025-11-01 15:12:18 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:18.107148 | orchestrator | 2025-11-01 15:12:18 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:21.153195 | orchestrator | 2025-11-01 15:12:21 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:21.153346 | orchestrator | 2025-11-01 15:12:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:24.200986 | orchestrator | 2025-11-01 15:12:24 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:24.201089 | orchestrator | 2025-11-01 15:12:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:27.260054 | orchestrator | 2025-11-01 15:12:27 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:27.260156 | orchestrator | 2025-11-01 15:12:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:30.316278 | orchestrator | 2025-11-01 15:12:30 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:30.316406 | orchestrator | 2025-11-01 15:12:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:33.364207 | orchestrator | 2025-11-01 15:12:33 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:33.364351 | orchestrator | 2025-11-01 15:12:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:36.410534 | orchestrator | 2025-11-01 15:12:36 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:36.410635 | orchestrator | 2025-11-01 15:12:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:39.458804 | orchestrator | 2025-11-01 15:12:39 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:39.458909 | orchestrator | 2025-11-01 15:12:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:42.506476 | orchestrator | 2025-11-01 15:12:42 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:42.506570 | orchestrator | 2025-11-01 15:12:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:45.561571 | orchestrator | 2025-11-01 15:12:45 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:45.561677 | orchestrator | 2025-11-01 15:12:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:48.619290 | orchestrator | 2025-11-01 15:12:48 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:48.619372 | orchestrator | 2025-11-01 15:12:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:51.665176 | orchestrator | 2025-11-01 15:12:51 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:51.665325 | orchestrator | 2025-11-01 15:12:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:54.711940 | orchestrator | 
2025-11-01 15:12:54 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:54.712036 | orchestrator | 2025-11-01 15:12:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:12:57.762614 | orchestrator | 2025-11-01 15:12:57 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:12:57.762716 | orchestrator | 2025-11-01 15:12:57 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:00.808845 | orchestrator | 2025-11-01 15:13:00 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:00.808930 | orchestrator | 2025-11-01 15:13:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:03.861574 | orchestrator | 2025-11-01 15:13:03 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:03.861666 | orchestrator | 2025-11-01 15:13:03 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:06.903281 | orchestrator | 2025-11-01 15:13:06 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:06.903385 | orchestrator | 2025-11-01 15:13:06 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:09.950659 | orchestrator | 2025-11-01 15:13:09 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:09.950777 | orchestrator | 2025-11-01 15:13:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:13.005079 | orchestrator | 2025-11-01 15:13:13 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:13.005146 | orchestrator | 2025-11-01 15:13:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:16.053089 | orchestrator | 2025-11-01 15:13:16 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:16.053279 | orchestrator | 2025-11-01 15:13:16 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:19.108512 | orchestrator | 2025-11-01 15:13:19 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:19.108605 | orchestrator | 2025-11-01 15:13:19 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:22.150680 | orchestrator | 2025-11-01 15:13:22 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:22.150786 | orchestrator | 2025-11-01 15:13:22 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:25.194119 | orchestrator | 2025-11-01 15:13:25 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:25.194211 | orchestrator | 2025-11-01 15:13:25 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:28.239000 | orchestrator | 2025-11-01 15:13:28 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:28.239110 | orchestrator | 2025-11-01 15:13:28 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:31.289457 | orchestrator | 2025-11-01 15:13:31 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:31.289549 | orchestrator | 2025-11-01 15:13:31 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:34.339362 | orchestrator | 2025-11-01 15:13:34 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:34.339472 | orchestrator | 2025-11-01 15:13:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:37.385939 | orchestrator | 2025-11-01 15:13:37 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in 
state STARTED 2025-11-01 15:13:37.386093 | orchestrator | 2025-11-01 15:13:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:40.435598 | orchestrator | 2025-11-01 15:13:40 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:40.435697 | orchestrator | 2025-11-01 15:13:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:43.489485 | orchestrator | 2025-11-01 15:13:43 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:43.489583 | orchestrator | 2025-11-01 15:13:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:46.531664 | orchestrator | 2025-11-01 15:13:46 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:46.531770 | orchestrator | 2025-11-01 15:13:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:49.577520 | orchestrator | 2025-11-01 15:13:49 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:49.577619 | orchestrator | 2025-11-01 15:13:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:52.631871 | orchestrator | 2025-11-01 15:13:52 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:52.631965 | orchestrator | 2025-11-01 15:13:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:55.681208 | orchestrator | 2025-11-01 15:13:55 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:55.681335 | orchestrator | 2025-11-01 15:13:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:13:58.727122 | orchestrator | 2025-11-01 15:13:58 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:13:58.727217 | orchestrator | 2025-11-01 15:13:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:01.774814 | orchestrator | 2025-11-01 15:14:01 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:01.774944 | orchestrator | 2025-11-01 15:14:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:04.824644 | orchestrator | 2025-11-01 15:14:04 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:04.824746 | orchestrator | 2025-11-01 15:14:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:07.873816 | orchestrator | 2025-11-01 15:14:07 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:07.873999 | orchestrator | 2025-11-01 15:14:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:10.917531 | orchestrator | 2025-11-01 15:14:10 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:10.917629 | orchestrator | 2025-11-01 15:14:10 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:13.964827 | orchestrator | 2025-11-01 15:14:13 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:13.964926 | orchestrator | 2025-11-01 15:14:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:17.009873 | orchestrator | 2025-11-01 15:14:17 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:17.009978 | orchestrator | 2025-11-01 15:14:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:20.054855 | orchestrator | 2025-11-01 15:14:20 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:20.054940 | orchestrator | 2025-11-01 15:14:20 | 
INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:23.097342 | orchestrator | 2025-11-01 15:14:23 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:23.097443 | orchestrator | 2025-11-01 15:14:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:26.143420 | orchestrator | 2025-11-01 15:14:26 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:26.143515 | orchestrator | 2025-11-01 15:14:26 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:29.192466 | orchestrator | 2025-11-01 15:14:29 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:29.192567 | orchestrator | 2025-11-01 15:14:29 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:32.243462 | orchestrator | 2025-11-01 15:14:32 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:32.243559 | orchestrator | 2025-11-01 15:14:32 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:35.289866 | orchestrator | 2025-11-01 15:14:35 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:35.289972 | orchestrator | 2025-11-01 15:14:35 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:38.336745 | orchestrator | 2025-11-01 15:14:38 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:38.336847 | orchestrator | 2025-11-01 15:14:38 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:41.386102 | orchestrator | 2025-11-01 15:14:41 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:41.386202 | orchestrator | 2025-11-01 15:14:41 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:44.435508 | orchestrator | 2025-11-01 15:14:44 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:44.435603 | orchestrator | 2025-11-01 15:14:44 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:47.481593 | orchestrator | 2025-11-01 15:14:47 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:47.481725 | orchestrator | 2025-11-01 15:14:47 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:50.530278 | orchestrator | 2025-11-01 15:14:50 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:50.530379 | orchestrator | 2025-11-01 15:14:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:53.569933 | orchestrator | 2025-11-01 15:14:53 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:53.570089 | orchestrator | 2025-11-01 15:14:53 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:56.619280 | orchestrator | 2025-11-01 15:14:56 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:56.619380 | orchestrator | 2025-11-01 15:14:56 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:14:59.660372 | orchestrator | 2025-11-01 15:14:59 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:14:59.660479 | orchestrator | 2025-11-01 15:14:59 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:02.708704 | orchestrator | 2025-11-01 15:15:02 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:02.708789 | orchestrator | 2025-11-01 15:15:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:05.763544 | 
orchestrator | 2025-11-01 15:15:05 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:05.763764 | orchestrator | 2025-11-01 15:15:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:08.807486 | orchestrator | 2025-11-01 15:15:08 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:08.807580 | orchestrator | 2025-11-01 15:15:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:11.852782 | orchestrator | 2025-11-01 15:15:11 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:11.852874 | orchestrator | 2025-11-01 15:15:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:14.895818 | orchestrator | 2025-11-01 15:15:14 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:14.895917 | orchestrator | 2025-11-01 15:15:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:17.943409 | orchestrator | 2025-11-01 15:15:17 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:17.943516 | orchestrator | 2025-11-01 15:15:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:20.988971 | orchestrator | 2025-11-01 15:15:20 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:20.989081 | orchestrator | 2025-11-01 15:15:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:24.031168 | orchestrator | 2025-11-01 15:15:24 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:24.031301 | orchestrator | 2025-11-01 15:15:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:27.072670 | orchestrator | 2025-11-01 15:15:27 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:27.072767 | orchestrator | 2025-11-01 15:15:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:30.113965 | orchestrator | 2025-11-01 15:15:30 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:30.114119 | orchestrator | 2025-11-01 15:15:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:33.159392 | orchestrator | 2025-11-01 15:15:33 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:33.159520 | orchestrator | 2025-11-01 15:15:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:36.215826 | orchestrator | 2025-11-01 15:15:36 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:36.215920 | orchestrator | 2025-11-01 15:15:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:39.266393 | orchestrator | 2025-11-01 15:15:39 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:39.266496 | orchestrator | 2025-11-01 15:15:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:42.321815 | orchestrator | 2025-11-01 15:15:42 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:42.321916 | orchestrator | 2025-11-01 15:15:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:45.364199 | orchestrator | 2025-11-01 15:15:45 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state STARTED 2025-11-01 15:15:45.364348 | orchestrator | 2025-11-01 15:15:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 15:15:48.412997 | orchestrator | 2025-11-01 15:15:48 | INFO  | Task 
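The STARTED/Wait messages above come from the deployment wrapper polling a background task (Celery-style states such as STARTED and SUCCESS) until it reaches a terminal state. A minimal sketch of such a wait loop, assuming a hypothetical get_task_state(task_id) helper in place of whatever client call the real tooling uses:

```python
import time

# Celery-style terminal states; anything else means the task is still running.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_task(task_id, get_task_state, check_interval=1.0, log=print):
    """Poll a task's state until it reaches a terminal state.

    get_task_state(task_id) -> str is a hypothetical helper standing in for
    whatever client call the real tooling uses; it is not part of this log.
    """
    while True:
        state = get_task_state(task_id)
        log(f"Task {task_id} is in state {state}")
        if state in TERMINAL_STATES:
            return state
        log(f"Wait {check_interval:g} second(s) until the next check")
        time.sleep(check_interval)
```

In this build the loop keeps reporting STARTED because the underlying playbook is still downloading the ironic-agent initramfs (about 59 minutes according to the task recap further below).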
2025-11-01 15:16:28.061080 | orchestrator |
2025-11-01 15:16:28.061182 | orchestrator |
2025-11-01 15:16:28.061197 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-11-01 15:16:28.061209 | orchestrator |
2025-11-01 15:16:28.061221 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-11-01 15:16:28.061232 | orchestrator | Saturday 01 November 2025 14:17:09 +0000 (0:00:00.111) 0:00:00.111 *****
2025-11-01 15:16:28.061244 | orchestrator | changed: [localhost]
2025-11-01 15:16:28.061308 | orchestrator |
2025-11-01 15:16:28.061320 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-11-01 15:16:28.061331 | orchestrator | Saturday 01 November 2025 14:17:11 +0000 (0:00:01.755) 0:00:01.867 *****
2025-11-01 15:16:28.061342 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2025-11-01 15:16:28.061354 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
2025-11-01 15:16:28.061365 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left).
2025-11-01 15:16:28.061387 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
[... the "STILL ALIVE" keep-alive notice repeats while the download runs for roughly 59 minutes ...]
2025-11-01 15:16:28.063995 | orchestrator | changed: [localhost]
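The FAILED - RETRYING lines above show the download task being retried with a countdown before it eventually succeeds. A rough Python equivalent of that retry-with-countdown pattern, with placeholder URL and destination values rather than the job's real ones:

```python
import time
import urllib.request

def download_with_retries(url, dest, retries=3, delay=5):
    """Fetch url into dest, retrying on failure with a countdown like the log above.

    url and dest are placeholders for illustration, not the values used by this job.
    """
    for attempt in range(retries + 1):      # initial attempt plus `retries` retries
        try:
            urllib.request.urlretrieve(url, dest)
            return
        except OSError:
            remaining = retries - attempt   # how many retries are still available
            if remaining <= 0:
                raise                       # out of retries: surface the error
            print(f"FAILED - RETRYING: Download ({remaining} retries left).")
            time.sleep(delay)
```

The kernel image download below follows the same pattern: it fails once and then succeeds on the next attempt.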
2025-11-01 15:16:28.064049 | orchestrator | changed: [localhost] 2025-11-01 15:16:28.064060 | orchestrator | 2025-11-01 15:16:28.064071 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 15:16:28.064081 | orchestrator | 2025-11-01 15:16:28.064092 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 15:16:28.064103 | orchestrator | Saturday 01 November 2025 15:16:26 +0000 (0:00:25.903) 0:59:17.369 ***** 2025-11-01 15:16:28.064114 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:16:28.064125 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:16:28.064136 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:16:28.064146 | orchestrator | 2025-11-01 15:16:28.064157 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 15:16:28.064173 | orchestrator | Saturday 01 November 2025 15:16:27 +0000 (0:00:00.367) 0:59:17.736 ***** 2025-11-01 15:16:28.064184 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-11-01 15:16:28.064195 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-11-01 15:16:28.064206 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-11-01 15:16:28.064217 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-11-01 15:16:28.064227 | orchestrator | 2025-11-01 15:16:28.064238 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-11-01 15:16:28.064294 | orchestrator | skipping: no hosts matched 2025-11-01 15:16:28.064307 | orchestrator | 2025-11-01 15:16:28.064317 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 15:16:28.064329 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 15:16:28.064342 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 15:16:28.064355 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 15:16:28.064366 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 15:16:28.064376 | orchestrator | 2025-11-01 15:16:28.064387 | orchestrator | 2025-11-01 15:16:28.064411 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 15:16:28.064422 | orchestrator | Saturday 01 November 2025 15:16:27 +0000 (0:00:00.646) 0:59:18.382 ***** 2025-11-01 15:16:28.064433 | orchestrator | =============================================================================== 2025-11-01 15:16:28.064444 | orchestrator | Download ironic-agent initramfs -------------------------------------- 3529.60s 2025-11-01 15:16:28.064455 | orchestrator | Download ironic-agent kernel ------------------------------------------- 25.90s 2025-11-01 15:16:28.064472 | orchestrator | Ensure the destination directory exists --------------------------------- 1.76s 2025-11-01 15:16:28.064483 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s 2025-11-01 15:16:28.064494 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2025-11-01 15:16:28.064505 | orchestrator | 2025-11-01 15:16:28 | INFO  | Task 090ba3b4-fe8a-44c3-976e-f1f5ce0823d9 is in state SUCCESS 2025-11-01 
15:16:28.064515 | orchestrator | 2025-11-01 15:16:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:16:31.105755 | orchestrator | 2025-11-01 15:16:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:16:34.151366 | orchestrator | 2025-11-01 15:16:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:16:37.194820 | orchestrator | 2025-11-01 15:16:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:16:40.249021 | orchestrator | 2025-11-01 15:16:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:16:43.289652 | orchestrator | 2025-11-01 15:16:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:16:46.332578 | orchestrator | 2025-11-01 15:16:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:16:49.378466 | orchestrator | 2025-11-01 15:16:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:16:52.425541 | orchestrator | 2025-11-01 15:16:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:16:55.471393 | orchestrator | 2025-11-01 15:16:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:16:58.514778 | orchestrator | 2025-11-01 15:16:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:17:01.556445 | orchestrator | 2025-11-01 15:17:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:17:04.598853 | orchestrator | 2025-11-01 15:17:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:17:07.637353 | orchestrator | 2025-11-01 15:17:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:17:10.683436 | orchestrator | 2025-11-01 15:17:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:17:13.725607 | orchestrator | 2025-11-01 15:17:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:17:16.772613 | orchestrator | 2025-11-01 15:17:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:17:19.814480 | orchestrator | 2025-11-01 15:17:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:17:22.865684 | orchestrator | 2025-11-01 15:17:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:17:25.913584 | orchestrator | 2025-11-01 15:17:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 15:17:28.955506 | orchestrator | 2025-11-01 15:17:29.310909 | orchestrator | 2025-11-01 15:17:29.313422 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Nov 1 15:17:29 UTC 2025 2025-11-01 15:17:29.313455 | orchestrator | 2025-11-01 15:17:29.647178 | orchestrator | ok: Runtime: 1:22:14.713853 2025-11-01 15:17:29.917183 | 2025-11-01 15:17:29.917343 | TASK [Bootstrap services] 2025-11-01 15:17:30.647471 | orchestrator | 2025-11-01 15:17:30.647653 | orchestrator | # BOOTSTRAP 2025-11-01 15:17:30.647677 | orchestrator | 2025-11-01 15:17:30.647691 | orchestrator | + set -e 2025-11-01 15:17:30.647704 | orchestrator | + echo 2025-11-01 15:17:30.647718 | orchestrator | + echo '# BOOTSTRAP' 2025-11-01 15:17:30.647736 | orchestrator | + echo 2025-11-01 15:17:30.647779 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-11-01 15:17:30.657146 | orchestrator | + set -e 2025-11-01 15:17:30.657285 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-11-01 15:17:35.492025 | orchestrator | 2025-11-01 15:17:35 | INFO  | It takes a moment until task 
f041c082-2bbf-4dc5-8187-172fbd48b8de (flavor-manager) has been started and output is visible here. 2025-11-01 15:17:44.771880 | orchestrator | 2025-11-01 15:17:39 | INFO  | Flavor SCS-1L-1 created 2025-11-01 15:17:44.772004 | orchestrator | 2025-11-01 15:17:39 | INFO  | Flavor SCS-1L-1-5 created 2025-11-01 15:17:44.772021 | orchestrator | 2025-11-01 15:17:40 | INFO  | Flavor SCS-1V-2 created 2025-11-01 15:17:44.772031 | orchestrator | 2025-11-01 15:17:40 | INFO  | Flavor SCS-1V-2-5 created 2025-11-01 15:17:44.772042 | orchestrator | 2025-11-01 15:17:40 | INFO  | Flavor SCS-1V-4 created 2025-11-01 15:17:44.772052 | orchestrator | 2025-11-01 15:17:40 | INFO  | Flavor SCS-1V-4-10 created 2025-11-01 15:17:44.772062 | orchestrator | 2025-11-01 15:17:40 | INFO  | Flavor SCS-1V-8 created 2025-11-01 15:17:44.772073 | orchestrator | 2025-11-01 15:17:40 | INFO  | Flavor SCS-1V-8-20 created 2025-11-01 15:17:44.772092 | orchestrator | 2025-11-01 15:17:41 | INFO  | Flavor SCS-2V-4 created 2025-11-01 15:17:44.772102 | orchestrator | 2025-11-01 15:17:41 | INFO  | Flavor SCS-2V-4-10 created 2025-11-01 15:17:44.772113 | orchestrator | 2025-11-01 15:17:41 | INFO  | Flavor SCS-2V-8 created 2025-11-01 15:17:44.772123 | orchestrator | 2025-11-01 15:17:41 | INFO  | Flavor SCS-2V-8-20 created 2025-11-01 15:17:44.772132 | orchestrator | 2025-11-01 15:17:41 | INFO  | Flavor SCS-2V-16 created 2025-11-01 15:17:44.772142 | orchestrator | 2025-11-01 15:17:41 | INFO  | Flavor SCS-2V-16-50 created 2025-11-01 15:17:44.772152 | orchestrator | 2025-11-01 15:17:42 | INFO  | Flavor SCS-4V-8 created 2025-11-01 15:17:44.772162 | orchestrator | 2025-11-01 15:17:42 | INFO  | Flavor SCS-4V-8-20 created 2025-11-01 15:17:44.772171 | orchestrator | 2025-11-01 15:17:42 | INFO  | Flavor SCS-4V-16 created 2025-11-01 15:17:44.772181 | orchestrator | 2025-11-01 15:17:42 | INFO  | Flavor SCS-4V-16-50 created 2025-11-01 15:17:44.772191 | orchestrator | 2025-11-01 15:17:42 | INFO  | Flavor SCS-4V-32 created 2025-11-01 15:17:44.772200 | orchestrator | 2025-11-01 15:17:43 | INFO  | Flavor SCS-4V-32-100 created 2025-11-01 15:17:44.772210 | orchestrator | 2025-11-01 15:17:43 | INFO  | Flavor SCS-8V-16 created 2025-11-01 15:17:44.772220 | orchestrator | 2025-11-01 15:17:43 | INFO  | Flavor SCS-8V-16-50 created 2025-11-01 15:17:44.772230 | orchestrator | 2025-11-01 15:17:43 | INFO  | Flavor SCS-8V-32 created 2025-11-01 15:17:44.772240 | orchestrator | 2025-11-01 15:17:43 | INFO  | Flavor SCS-8V-32-100 created 2025-11-01 15:17:44.772249 | orchestrator | 2025-11-01 15:17:43 | INFO  | Flavor SCS-16V-32 created 2025-11-01 15:17:44.772299 | orchestrator | 2025-11-01 15:17:44 | INFO  | Flavor SCS-16V-32-100 created 2025-11-01 15:17:44.772311 | orchestrator | 2025-11-01 15:17:44 | INFO  | Flavor SCS-2V-4-20s created 2025-11-01 15:17:44.772320 | orchestrator | 2025-11-01 15:17:44 | INFO  | Flavor SCS-4V-8-50s created 2025-11-01 15:17:44.772330 | orchestrator | 2025-11-01 15:17:44 | INFO  | Flavor SCS-8V-32-100s created 2025-11-01 15:17:47.122209 | orchestrator | 2025-11-01 15:17:47 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-11-01 15:17:47.203340 | orchestrator | 2025-11-01 15:17:47 | INFO  | Task 9935be6d-2a86-463b-934d-ceb1a184052d (bootstrap-basic) was prepared for execution. 2025-11-01 15:17:47.203421 | orchestrator | 2025-11-01 15:17:47 | INFO  | It takes a moment until task 9935be6d-2a86-463b-934d-ceb1a184052d (bootstrap-basic) has been started and output is visible here. 
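The flavor-manager run above creates the SCS standard flavors; the name encodes the resources, e.g. SCS-2V-4-10 is 2 vCPUs, 4 GiB RAM and a 10 GB root disk. As a minimal sketch (flavor-manager itself works from a flavor specification; this is only the hand-rolled equivalent for a single flavor with the plain OpenStack CLI):

  # Sketch only: create one SCS-style flavor by hand.
  openstack flavor create \
      --vcpus 2 \
      --ram 4096 \
      --disk 10 \
      --public \
      SCS-2V-4-10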
2025-11-01 15:18:50.248956 | orchestrator | 2025-11-01 15:18:50.249071 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-11-01 15:18:50.249087 | orchestrator | 2025-11-01 15:18:50.249099 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 15:18:50.249109 | orchestrator | Saturday 01 November 2025 15:17:51 +0000 (0:00:00.074) 0:00:00.074 ***** 2025-11-01 15:18:50.249119 | orchestrator | ok: [localhost] 2025-11-01 15:18:50.249130 | orchestrator | 2025-11-01 15:18:50.249140 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-11-01 15:18:50.249150 | orchestrator | Saturday 01 November 2025 15:17:53 +0000 (0:00:02.010) 0:00:02.085 ***** 2025-11-01 15:18:50.249159 | orchestrator | ok: [localhost] 2025-11-01 15:18:50.249169 | orchestrator | 2025-11-01 15:18:50.249179 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-11-01 15:18:50.249189 | orchestrator | Saturday 01 November 2025 15:18:02 +0000 (0:00:08.933) 0:00:11.018 ***** 2025-11-01 15:18:50.249199 | orchestrator | changed: [localhost] 2025-11-01 15:18:50.249209 | orchestrator | 2025-11-01 15:18:50.249219 | orchestrator | TASK [Get volume type local] *************************************************** 2025-11-01 15:18:50.249229 | orchestrator | Saturday 01 November 2025 15:18:10 +0000 (0:00:08.130) 0:00:19.149 ***** 2025-11-01 15:18:50.249239 | orchestrator | ok: [localhost] 2025-11-01 15:18:50.249248 | orchestrator | 2025-11-01 15:18:50.249258 | orchestrator | TASK [Create volume type local] ************************************************ 2025-11-01 15:18:50.249301 | orchestrator | Saturday 01 November 2025 15:18:18 +0000 (0:00:07.752) 0:00:26.901 ***** 2025-11-01 15:18:50.249315 | orchestrator | changed: [localhost] 2025-11-01 15:18:50.249326 | orchestrator | 2025-11-01 15:18:50.249336 | orchestrator | TASK [Create public network] *************************************************** 2025-11-01 15:18:50.249345 | orchestrator | Saturday 01 November 2025 15:18:25 +0000 (0:00:06.955) 0:00:33.856 ***** 2025-11-01 15:18:50.249355 | orchestrator | changed: [localhost] 2025-11-01 15:18:50.249365 | orchestrator | 2025-11-01 15:18:50.249375 | orchestrator | TASK [Set public network to default] ******************************************* 2025-11-01 15:18:50.249385 | orchestrator | Saturday 01 November 2025 15:18:31 +0000 (0:00:05.704) 0:00:39.561 ***** 2025-11-01 15:18:50.249395 | orchestrator | changed: [localhost] 2025-11-01 15:18:50.249404 | orchestrator | 2025-11-01 15:18:50.249414 | orchestrator | TASK [Create public subnet] **************************************************** 2025-11-01 15:18:50.249434 | orchestrator | Saturday 01 November 2025 15:18:37 +0000 (0:00:06.414) 0:00:45.976 ***** 2025-11-01 15:18:50.249444 | orchestrator | changed: [localhost] 2025-11-01 15:18:50.249454 | orchestrator | 2025-11-01 15:18:50.249464 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-11-01 15:18:50.249474 | orchestrator | Saturday 01 November 2025 15:18:42 +0000 (0:00:04.925) 0:00:50.901 ***** 2025-11-01 15:18:50.249484 | orchestrator | changed: [localhost] 2025-11-01 15:18:50.249495 | orchestrator | 2025-11-01 15:18:50.249506 | orchestrator | TASK [Create manager role] ***************************************************** 2025-11-01 15:18:50.249517 | orchestrator | Saturday 01 November 2025 
15:18:46 +0000 (0:00:03.912) 0:00:54.814 ***** 2025-11-01 15:18:50.249527 | orchestrator | ok: [localhost] 2025-11-01 15:18:50.249538 | orchestrator | 2025-11-01 15:18:50.249549 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 15:18:50.249560 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 15:18:50.249572 | orchestrator | 2025-11-01 15:18:50.249583 | orchestrator | 2025-11-01 15:18:50.249594 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 15:18:50.249654 | orchestrator | Saturday 01 November 2025 15:18:49 +0000 (0:00:03.586) 0:00:58.400 ***** 2025-11-01 15:18:50.249666 | orchestrator | =============================================================================== 2025-11-01 15:18:50.249677 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.93s 2025-11-01 15:18:50.249688 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.13s 2025-11-01 15:18:50.249698 | orchestrator | Get volume type local --------------------------------------------------- 7.75s 2025-11-01 15:18:50.249709 | orchestrator | Create volume type local ------------------------------------------------ 6.96s 2025-11-01 15:18:50.249719 | orchestrator | Set public network to default ------------------------------------------- 6.41s 2025-11-01 15:18:50.249730 | orchestrator | Create public network --------------------------------------------------- 5.70s 2025-11-01 15:18:50.249741 | orchestrator | Create public subnet ---------------------------------------------------- 4.93s 2025-11-01 15:18:50.249752 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.91s 2025-11-01 15:18:50.249763 | orchestrator | Create manager role ----------------------------------------------------- 3.59s 2025-11-01 15:18:50.249773 | orchestrator | Gathering Facts --------------------------------------------------------- 2.01s 2025-11-01 15:18:52.811419 | orchestrator | 2025-11-01 15:18:52 | INFO  | It takes a moment until task 2449215f-c024-4953-bd94-45a030c107b5 (image-manager) has been started and output is visible here. 2025-11-01 15:19:36.232585 | orchestrator | 2025-11-01 15:18:55 | INFO  | Processing image 'Cirros 0.6.2' 2025-11-01 15:19:36.232702 | orchestrator | 2025-11-01 15:18:55 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-11-01 15:19:36.232721 | orchestrator | 2025-11-01 15:18:55 | INFO  | Importing image Cirros 0.6.2 2025-11-01 15:19:36.232733 | orchestrator | 2025-11-01 15:18:55 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-11-01 15:19:36.232746 | orchestrator | 2025-11-01 15:18:58 | INFO  | Waiting for image to leave queued state... 2025-11-01 15:19:36.232758 | orchestrator | 2025-11-01 15:19:00 | INFO  | Waiting for import to complete... 
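The bootstrap-basic play summarised above creates the LUKS and local volume types, an external public network with a subnet and a default IPv4 subnet pool, and a manager role. A rough manual equivalent with the OpenStack CLI is sketched below; the CIDRs and any names not shown in the play output are assumptions, not the values actually used:

  # Sketch only: manual equivalent of the bootstrap-basic resources; CIDRs are placeholders.
  openstack volume type create LUKS
  openstack volume type create local
  openstack network create --external --default public
  openstack subnet create --network public --subnet-range 192.0.2.0/24 public-subnet
  openstack subnet pool create --default --pool-prefix 10.0.0.0/16 --default-prefix-length 24 default-ipv4
  openstack role create manager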
2025-11-01 15:19:36.232769 | orchestrator | 2025-11-01 15:19:10 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-11-01 15:19:36.232780 | orchestrator | 2025-11-01 15:19:10 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-11-01 15:19:36.232791 | orchestrator | 2025-11-01 15:19:10 | INFO  | Setting internal_version = 0.6.2 2025-11-01 15:19:36.232803 | orchestrator | 2025-11-01 15:19:10 | INFO  | Setting image_original_user = cirros 2025-11-01 15:19:36.232814 | orchestrator | 2025-11-01 15:19:10 | INFO  | Adding tag os:cirros 2025-11-01 15:19:36.232825 | orchestrator | 2025-11-01 15:19:11 | INFO  | Setting property architecture: x86_64 2025-11-01 15:19:36.232836 | orchestrator | 2025-11-01 15:19:11 | INFO  | Setting property hw_disk_bus: scsi 2025-11-01 15:19:36.232847 | orchestrator | 2025-11-01 15:19:11 | INFO  | Setting property hw_rng_model: virtio 2025-11-01 15:19:36.232858 | orchestrator | 2025-11-01 15:19:12 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-11-01 15:19:36.232869 | orchestrator | 2025-11-01 15:19:12 | INFO  | Setting property hw_watchdog_action: reset 2025-11-01 15:19:36.232880 | orchestrator | 2025-11-01 15:19:12 | INFO  | Setting property hypervisor_type: qemu 2025-11-01 15:19:36.232891 | orchestrator | 2025-11-01 15:19:12 | INFO  | Setting property os_distro: cirros 2025-11-01 15:19:36.232901 | orchestrator | 2025-11-01 15:19:12 | INFO  | Setting property os_purpose: minimal 2025-11-01 15:19:36.232912 | orchestrator | 2025-11-01 15:19:13 | INFO  | Setting property replace_frequency: never 2025-11-01 15:19:36.232946 | orchestrator | 2025-11-01 15:19:13 | INFO  | Setting property uuid_validity: none 2025-11-01 15:19:36.232958 | orchestrator | 2025-11-01 15:19:13 | INFO  | Setting property provided_until: none 2025-11-01 15:19:36.232976 | orchestrator | 2025-11-01 15:19:13 | INFO  | Setting property image_description: Cirros 2025-11-01 15:19:36.232993 | orchestrator | 2025-11-01 15:19:14 | INFO  | Setting property image_name: Cirros 2025-11-01 15:19:36.233004 | orchestrator | 2025-11-01 15:19:14 | INFO  | Setting property internal_version: 0.6.2 2025-11-01 15:19:36.233014 | orchestrator | 2025-11-01 15:19:14 | INFO  | Setting property image_original_user: cirros 2025-11-01 15:19:36.233025 | orchestrator | 2025-11-01 15:19:15 | INFO  | Setting property os_version: 0.6.2 2025-11-01 15:19:36.233036 | orchestrator | 2025-11-01 15:19:15 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-11-01 15:19:36.233049 | orchestrator | 2025-11-01 15:19:15 | INFO  | Setting property image_build_date: 2023-05-30 2025-11-01 15:19:36.233059 | orchestrator | 2025-11-01 15:19:15 | INFO  | Checking status of 'Cirros 0.6.2' 2025-11-01 15:19:36.233070 | orchestrator | 2025-11-01 15:19:15 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-11-01 15:19:36.233080 | orchestrator | 2025-11-01 15:19:15 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-11-01 15:19:36.233091 | orchestrator | 2025-11-01 15:19:16 | INFO  | Processing image 'Cirros 0.6.3' 2025-11-01 15:19:36.233102 | orchestrator | 2025-11-01 15:19:16 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-11-01 15:19:36.233113 | orchestrator | 2025-11-01 15:19:16 | INFO  | Importing image Cirros 0.6.3 2025-11-01 15:19:36.233124 | orchestrator | 2025-11-01 15:19:16 | INFO  | Importing from URL 
https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-11-01 15:19:36.233134 | orchestrator | 2025-11-01 15:19:17 | INFO  | Waiting for image to leave queued state... 2025-11-01 15:19:36.233145 | orchestrator | 2025-11-01 15:19:19 | INFO  | Waiting for import to complete... 2025-11-01 15:19:36.233172 | orchestrator | 2025-11-01 15:19:30 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-11-01 15:19:36.233184 | orchestrator | 2025-11-01 15:19:30 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-11-01 15:19:36.233195 | orchestrator | 2025-11-01 15:19:30 | INFO  | Setting internal_version = 0.6.3 2025-11-01 15:19:36.233206 | orchestrator | 2025-11-01 15:19:30 | INFO  | Setting image_original_user = cirros 2025-11-01 15:19:36.233216 | orchestrator | 2025-11-01 15:19:30 | INFO  | Adding tag os:cirros 2025-11-01 15:19:36.233227 | orchestrator | 2025-11-01 15:19:30 | INFO  | Setting property architecture: x86_64 2025-11-01 15:19:36.233238 | orchestrator | 2025-11-01 15:19:30 | INFO  | Setting property hw_disk_bus: scsi 2025-11-01 15:19:36.233248 | orchestrator | 2025-11-01 15:19:31 | INFO  | Setting property hw_rng_model: virtio 2025-11-01 15:19:36.233259 | orchestrator | 2025-11-01 15:19:31 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-11-01 15:19:36.233296 | orchestrator | 2025-11-01 15:19:31 | INFO  | Setting property hw_watchdog_action: reset 2025-11-01 15:19:36.233307 | orchestrator | 2025-11-01 15:19:31 | INFO  | Setting property hypervisor_type: qemu 2025-11-01 15:19:36.233318 | orchestrator | 2025-11-01 15:19:32 | INFO  | Setting property os_distro: cirros 2025-11-01 15:19:36.233338 | orchestrator | 2025-11-01 15:19:32 | INFO  | Setting property os_purpose: minimal 2025-11-01 15:19:36.233349 | orchestrator | 2025-11-01 15:19:32 | INFO  | Setting property replace_frequency: never 2025-11-01 15:19:36.233359 | orchestrator | 2025-11-01 15:19:33 | INFO  | Setting property uuid_validity: none 2025-11-01 15:19:36.233370 | orchestrator | 2025-11-01 15:19:33 | INFO  | Setting property provided_until: none 2025-11-01 15:19:36.233381 | orchestrator | 2025-11-01 15:19:33 | INFO  | Setting property image_description: Cirros 2025-11-01 15:19:36.233392 | orchestrator | 2025-11-01 15:19:33 | INFO  | Setting property image_name: Cirros 2025-11-01 15:19:36.233403 | orchestrator | 2025-11-01 15:19:34 | INFO  | Setting property internal_version: 0.6.3 2025-11-01 15:19:36.233413 | orchestrator | 2025-11-01 15:19:34 | INFO  | Setting property image_original_user: cirros 2025-11-01 15:19:36.233424 | orchestrator | 2025-11-01 15:19:34 | INFO  | Setting property os_version: 0.6.3 2025-11-01 15:19:36.233435 | orchestrator | 2025-11-01 15:19:34 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-11-01 15:19:36.233446 | orchestrator | 2025-11-01 15:19:35 | INFO  | Setting property image_build_date: 2024-09-26 2025-11-01 15:19:36.233462 | orchestrator | 2025-11-01 15:19:35 | INFO  | Checking status of 'Cirros 0.6.3' 2025-11-01 15:19:36.233473 | orchestrator | 2025-11-01 15:19:35 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-11-01 15:19:36.233484 | orchestrator | 2025-11-01 15:19:35 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-11-01 15:19:36.600063 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-11-01 15:19:38.987534 | orchestrator | 2025-11-01 15:19:38 | INFO  | 
date: 2025-11-01 2025-11-01 15:19:38.987628 | orchestrator | 2025-11-01 15:19:38 | INFO  | image: octavia-amphora-haproxy-2024.2.20251101.qcow2 2025-11-01 15:19:38.988039 | orchestrator | 2025-11-01 15:19:38 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251101.qcow2 2025-11-01 15:19:38.988360 | orchestrator | 2025-11-01 15:19:38 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251101.qcow2.CHECKSUM 2025-11-01 15:19:39.159393 | orchestrator | 2025-11-01 15:19:39 | INFO  | checksum: 665b63d55c855bb8158b5b9da75941485fad24fac81eb681f57aae95b3ea6c60 2025-11-01 15:19:39.238531 | orchestrator | 2025-11-01 15:19:39 | INFO  | It takes a moment until task 310d426d-b2f2-4ada-90c1-98e9d09baf32 (image-manager) has been started and output is visible here. 2025-11-01 15:20:51.364835 | orchestrator | 2025-11-01 15:19:41 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-11-01' 2025-11-01 15:20:51.364957 | orchestrator | 2025-11-01 15:19:41 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251101.qcow2: 200 2025-11-01 15:20:51.364977 | orchestrator | 2025-11-01 15:19:41 | INFO  | Importing image OpenStack Octavia Amphora 2025-11-01 2025-11-01 15:20:51.364992 | orchestrator | 2025-11-01 15:19:41 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251101.qcow2 2025-11-01 15:20:51.365005 | orchestrator | 2025-11-01 15:19:43 | INFO  | Waiting for image to leave queued state... 2025-11-01 15:20:51.365016 | orchestrator | 2025-11-01 15:19:45 | INFO  | Waiting for import to complete... 2025-11-01 15:20:51.365028 | orchestrator | 2025-11-01 15:19:55 | INFO  | Waiting for import to complete... 2025-11-01 15:20:51.365062 | orchestrator | 2025-11-01 15:20:05 | INFO  | Waiting for import to complete... 2025-11-01 15:20:51.365074 | orchestrator | 2025-11-01 15:20:15 | INFO  | Waiting for import to complete... 2025-11-01 15:20:51.365084 | orchestrator | 2025-11-01 15:20:25 | INFO  | Waiting for import to complete... 2025-11-01 15:20:51.365095 | orchestrator | 2025-11-01 15:20:35 | INFO  | Waiting for import to complete... 
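The amphora bootstrap script logs the dated image URL, fetches the published checksum and then hands the upload over to image-manager. A hedged shell sketch of the same verify-then-upload flow follows; the image URL and checksum URL are taken from the log, while the assumption that the CHECKSUM file starts with the bare SHA256 digest, the local file name and the minimal property set are illustrative only:

  # Sketch only: download, verify and upload the amphora image.
  IMAGE_URL="https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251101.qcow2"
  curl --fail --location --output amphora.qcow2 "${IMAGE_URL}"
  # Assumption: the first field of the CHECKSUM file is the SHA256 digest.
  EXPECTED=$(curl --fail --silent "${IMAGE_URL}.CHECKSUM" | awk '{print $1; exit}')
  ACTUAL=$(sha256sum amphora.qcow2 | awk '{print $1}')
  [ "${EXPECTED}" = "${ACTUAL}" ] || { echo "checksum mismatch" >&2; exit 1; }
  openstack image create \
      --disk-format qcow2 --container-format bare \
      --file amphora.qcow2 --tag amphora \
      "OpenStack Octavia Amphora 2025-11-01"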
2025-11-01 15:20:51.365106 | orchestrator | 2025-11-01 15:20:45 | INFO  | Import of 'OpenStack Octavia Amphora 2025-11-01' successfully completed, reloading images 2025-11-01 15:20:51.365118 | orchestrator | 2025-11-01 15:20:46 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-11-01' 2025-11-01 15:20:51.365129 | orchestrator | 2025-11-01 15:20:46 | INFO  | Setting internal_version = 2025-11-01 2025-11-01 15:20:51.365140 | orchestrator | 2025-11-01 15:20:46 | INFO  | Setting image_original_user = ubuntu 2025-11-01 15:20:51.365151 | orchestrator | 2025-11-01 15:20:46 | INFO  | Adding tag amphora 2025-11-01 15:20:51.365162 | orchestrator | 2025-11-01 15:20:46 | INFO  | Adding tag os:ubuntu 2025-11-01 15:20:51.365173 | orchestrator | 2025-11-01 15:20:46 | INFO  | Setting property architecture: x86_64 2025-11-01 15:20:51.365184 | orchestrator | 2025-11-01 15:20:46 | INFO  | Setting property hw_disk_bus: scsi 2025-11-01 15:20:51.365194 | orchestrator | 2025-11-01 15:20:47 | INFO  | Setting property hw_rng_model: virtio 2025-11-01 15:20:51.365205 | orchestrator | 2025-11-01 15:20:47 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-11-01 15:20:51.365216 | orchestrator | 2025-11-01 15:20:47 | INFO  | Setting property hw_watchdog_action: reset 2025-11-01 15:20:51.365227 | orchestrator | 2025-11-01 15:20:47 | INFO  | Setting property hypervisor_type: qemu 2025-11-01 15:20:51.365254 | orchestrator | 2025-11-01 15:20:47 | INFO  | Setting property os_distro: ubuntu 2025-11-01 15:20:51.365265 | orchestrator | 2025-11-01 15:20:48 | INFO  | Setting property replace_frequency: quarterly 2025-11-01 15:20:51.365312 | orchestrator | 2025-11-01 15:20:48 | INFO  | Setting property uuid_validity: last-1 2025-11-01 15:20:51.365333 | orchestrator | 2025-11-01 15:20:48 | INFO  | Setting property provided_until: none 2025-11-01 15:20:51.365353 | orchestrator | 2025-11-01 15:20:48 | INFO  | Setting property os_purpose: network 2025-11-01 15:20:51.365372 | orchestrator | 2025-11-01 15:20:49 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-11-01 15:20:51.365388 | orchestrator | 2025-11-01 15:20:49 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-11-01 15:20:51.365400 | orchestrator | 2025-11-01 15:20:49 | INFO  | Setting property internal_version: 2025-11-01 2025-11-01 15:20:51.365412 | orchestrator | 2025-11-01 15:20:49 | INFO  | Setting property image_original_user: ubuntu 2025-11-01 15:20:51.365424 | orchestrator | 2025-11-01 15:20:50 | INFO  | Setting property os_version: 2025-11-01 2025-11-01 15:20:51.365437 | orchestrator | 2025-11-01 15:20:50 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251101.qcow2 2025-11-01 15:20:51.365449 | orchestrator | 2025-11-01 15:20:50 | INFO  | Setting property image_build_date: 2025-11-01 2025-11-01 15:20:51.365461 | orchestrator | 2025-11-01 15:20:50 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-11-01' 2025-11-01 15:20:51.365473 | orchestrator | 2025-11-01 15:20:50 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-11-01' 2025-11-01 15:20:51.365504 | orchestrator | 2025-11-01 15:20:51 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-11-01 15:20:51.365528 | orchestrator | 2025-11-01 15:20:51 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-11-01 15:20:51.365542 | orchestrator | 2025-11-01 15:20:51 | INFO  | Processing image 
'Cirros 0.6.2' (removal candidate) 2025-11-01 15:20:51.365554 | orchestrator | 2025-11-01 15:20:51 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-11-01 15:20:52.108020 | orchestrator | ok: Runtime: 0:03:21.448268 2025-11-01 15:20:52.131855 | 2025-11-01 15:20:52.131985 | TASK [Run checks] 2025-11-01 15:20:52.815127 | orchestrator | + set -e 2025-11-01 15:20:52.815333 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-01 15:20:52.815355 | orchestrator | ++ export INTERACTIVE=false 2025-11-01 15:20:52.815370 | orchestrator | ++ INTERACTIVE=false 2025-11-01 15:20:52.815381 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-01 15:20:52.815390 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-01 15:20:52.815401 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-11-01 15:20:52.816578 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-11-01 15:20:52.822863 | orchestrator | 2025-11-01 15:20:52.822885 | orchestrator | # CHECK 2025-11-01 15:20:52.822895 | orchestrator | 2025-11-01 15:20:52.822905 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-01 15:20:52.822916 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-01 15:20:52.822924 | orchestrator | + echo 2025-11-01 15:20:52.822932 | orchestrator | + echo '# CHECK' 2025-11-01 15:20:52.822940 | orchestrator | + echo 2025-11-01 15:20:52.823057 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-11-01 15:20:52.824306 | orchestrator | ++ semver latest 5.0.0 2025-11-01 15:20:52.889012 | orchestrator | 2025-11-01 15:20:52.889033 | orchestrator | ## Containers @ testbed-manager 2025-11-01 15:20:52.889043 | orchestrator | 2025-11-01 15:20:52.889052 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-01 15:20:52.889060 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 15:20:52.889068 | orchestrator | + echo 2025-11-01 15:20:52.889077 | orchestrator | + echo '## Containers @ testbed-manager' 2025-11-01 15:20:52.889084 | orchestrator | + echo 2025-11-01 15:20:52.889092 | orchestrator | + osism container testbed-manager ps 2025-11-01 15:20:55.305055 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-11-01 15:20:55.305155 | orchestrator | 4e52a7f95e63 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 56 minutes ago Up 56 minutes prometheus_blackbox_exporter 2025-11-01 15:20:55.305178 | orchestrator | 93b2d243d497 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 56 minutes ago Up 56 minutes prometheus_alertmanager 2025-11-01 15:20:55.305191 | orchestrator | 1dd0ed52644c registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_cadvisor 2025-11-01 15:20:55.305208 | orchestrator | bc8890d07c3f registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_node_exporter 2025-11-01 15:20:55.305220 | orchestrator | 76831ad43403 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_server 2025-11-01 15:20:55.305235 | orchestrator | b08562fdd03e registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" About an hour ago Up About an hour cephclient 2025-11-01 15:20:55.305247 | orchestrator | cec69d71a6c2 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" About an hour ago 
Up About an hour cron 2025-11-01 15:20:55.305259 | orchestrator | 61658cb00c8e registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" About an hour ago Up About an hour kolla_toolbox 2025-11-01 15:20:55.305270 | orchestrator | 83d1f4ea1d0e registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" About an hour ago Up About an hour fluentd 2025-11-01 15:20:55.305320 | orchestrator | c50b0676bde4 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" About an hour ago Up About an hour (healthy) 80/tcp phpmyadmin 2025-11-01 15:20:55.305333 | orchestrator | a5b31d7f1316 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" About an hour ago Up About an hour openstackclient 2025-11-01 15:20:55.305344 | orchestrator | 52af771fcea7 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" About an hour ago Up About an hour (healthy) 8080/tcp homer 2025-11-01 15:20:55.305355 | orchestrator | fa06bf4d67e7 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2025-11-01 15:20:55.305367 | orchestrator | 7abe9b62028c registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 2 hours ago Up About an hour (healthy) manager-inventory_reconciler-1 2025-11-01 15:20:55.305378 | orchestrator | de8c07fd916f registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 2 hours ago Up About an hour (healthy) kolla-ansible 2025-11-01 15:20:55.305408 | orchestrator | 146ea37e1135 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 2 hours ago Up About an hour (healthy) osism-ansible 2025-11-01 15:20:55.305425 | orchestrator | 786638d0747b registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 2 hours ago Up About an hour (healthy) osism-kubernetes 2025-11-01 15:20:55.305437 | orchestrator | 25b2e69c00e9 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 2 hours ago Up About an hour (healthy) ceph-ansible 2025-11-01 15:20:55.305448 | orchestrator | 4500dd324394 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up About an hour (healthy) 8000/tcp manager-ara-server-1 2025-11-01 15:20:55.305460 | orchestrator | b92423522c04 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 2 hours ago Up About an hour 192.168.16.5:3000->3000/tcp osism-frontend 2025-11-01 15:20:55.305471 | orchestrator | 6f5c10a556cf registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 2 hours ago Up About an hour (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-11-01 15:20:55.305482 | orchestrator | e3252b269688 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 2 hours ago Up About an hour (healthy) manager-flower-1 2025-11-01 15:20:55.305493 | orchestrator | 22b15b3089c1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 2 hours ago Up About an hour (healthy) manager-beat-1 2025-11-01 15:20:55.305513 | orchestrator | 83398984c26d registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 2 hours ago Up About an hour (healthy) osismclient 2025-11-01 15:20:55.305525 | orchestrator | ffd77f0c8b00 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 2 hours ago Up About an hour (healthy) manager-listener-1 2025-11-01 15:20:55.305536 | orchestrator | 121e43d1274f registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 2 hours ago Up About an hour (healthy) manager-openstack-1 2025-11-01 15:20:55.305548 | orchestrator | aaea087eed9d 
registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" 2 hours ago Up About an hour (healthy) 3306/tcp manager-mariadb-1 2025-11-01 15:20:55.305558 | orchestrator | bb2b8c4bd793 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 2 hours ago Up About an hour (healthy) 6379/tcp manager-redis-1 2025-11-01 15:20:55.305570 | orchestrator | f2681312b9e7 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-11-01 15:20:55.653332 | orchestrator | 2025-11-01 15:20:55.653416 | orchestrator | ## Images @ testbed-manager 2025-11-01 15:20:55.653431 | orchestrator | 2025-11-01 15:20:55.653443 | orchestrator | + echo 2025-11-01 15:20:55.653455 | orchestrator | + echo '## Images @ testbed-manager' 2025-11-01 15:20:55.653467 | orchestrator | + echo 2025-11-01 15:20:55.653478 | orchestrator | + osism container testbed-manager images 2025-11-01 15:20:58.064271 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-11-01 15:20:58.064380 | orchestrator | registry.osism.tech/osism/osism latest 785ec9e82457 2 hours ago 323MB 2025-11-01 15:20:58.064394 | orchestrator | registry.osism.tech/osism/osism-frontend latest 23b002232069 2 hours ago 238MB 2025-11-01 15:20:58.064423 | orchestrator | registry.osism.tech/osism/homer v25.10.1 97ec70bd825b 12 hours ago 11.5MB 2025-11-01 15:20:58.064435 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 8ba85c7431b1 12 hours ago 236MB 2025-11-01 15:20:58.064446 | orchestrator | registry.osism.tech/osism/cephclient reef ff95829428ad 12 hours ago 453MB 2025-11-01 15:20:58.064457 | orchestrator | registry.osism.tech/kolla/cron 2024.2 eaa73375e046 13 hours ago 267MB 2025-11-01 15:20:58.064468 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ce8a1ccf9781 13 hours ago 580MB 2025-11-01 15:20:58.064478 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d0c82a0ec65c 13 hours ago 671MB 2025-11-01 15:20:58.064489 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 11fbf30a3486 13 hours ago 309MB 2025-11-01 15:20:58.064500 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d674d1069289 13 hours ago 307MB 2025-11-01 15:20:58.064511 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1478fa905298 13 hours ago 358MB 2025-11-01 15:20:58.064521 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 8b134aefa73f 13 hours ago 840MB 2025-11-01 15:20:58.064533 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 b8f395f83943 13 hours ago 405MB 2025-11-01 15:20:58.064543 | orchestrator | registry.osism.tech/osism/osism-ansible latest fb95637d6084 15 hours ago 597MB 2025-11-01 15:20:58.064571 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 f9cd9a3567f2 15 hours ago 593MB 2025-11-01 15:20:58.064582 | orchestrator | registry.osism.tech/osism/ceph-ansible reef f76c3643e07b 15 hours ago 545MB 2025-11-01 15:20:58.064593 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest b83a70ae01c7 15 hours ago 1.21GB 2025-11-01 15:20:58.064604 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 5651c69c70d7 15 hours ago 316MB 2025-11-01 15:20:58.064615 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 3 weeks ago 742MB 2025-11-01 15:20:58.064626 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 2 months 
ago 275MB 2025-11-01 15:20:58.064636 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.3 ea44c9edeacf 2 months ago 329MB 2025-11-01 15:20:58.064647 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 3 months ago 226MB 2025-11-01 15:20:58.064657 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 3 months ago 41.4MB 2025-11-01 15:20:58.064668 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 16 months ago 146MB 2025-11-01 15:20:58.386495 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-11-01 15:20:58.386735 | orchestrator | ++ semver latest 5.0.0 2025-11-01 15:20:58.442102 | orchestrator | 2025-11-01 15:20:58.442162 | orchestrator | ## Containers @ testbed-node-0 2025-11-01 15:20:58.442176 | orchestrator | 2025-11-01 15:20:58.442188 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-01 15:20:58.442199 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 15:20:58.442209 | orchestrator | + echo 2025-11-01 15:20:58.442220 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-11-01 15:20:58.442231 | orchestrator | + echo 2025-11-01 15:20:58.442242 | orchestrator | + osism container testbed-node-0 ps 2025-11-01 15:21:01.435013 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-11-01 15:21:01.435119 | orchestrator | 892665d8b92c registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) octavia_worker 2025-11-01 15:21:01.435143 | orchestrator | 9f62814bd6df registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) octavia_housekeeping 2025-11-01 15:21:01.435162 | orchestrator | 19bc6fe34654 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) octavia_health_manager 2025-11-01 15:21:01.435181 | orchestrator | 34dfe0856950 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 46 minutes ago Up 46 minutes octavia_driver_agent 2025-11-01 15:21:01.435198 | orchestrator | 12e668043bfb registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 47 minutes ago Up 47 minutes (healthy) octavia_api 2025-11-01 15:21:01.435238 | orchestrator | bc00d7f02b44 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) nova_novncproxy 2025-11-01 15:21:01.435257 | orchestrator | d27f7736a2af registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) nova_conductor 2025-11-01 15:21:01.435306 | orchestrator | d8b715afb70c registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) nova_api 2025-11-01 15:21:01.435327 | orchestrator | f53845ca3f0c registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) nova_scheduler 2025-11-01 15:21:01.435370 | orchestrator | b3d96eda1edf registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 53 minutes ago Up 53 minutes grafana 2025-11-01 15:21:01.435389 | orchestrator | 343a0266182a registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) glance_api 2025-11-01 15:21:01.435408 | orchestrator | ef77bd392660 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) 
cinder_scheduler 2025-11-01 15:21:01.435427 | orchestrator | 5e3ee3aa0e2a registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) cinder_api 2025-11-01 15:21:01.435446 | orchestrator | 3e3105675a55 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 56 minutes ago Up 56 minutes prometheus_elasticsearch_exporter 2025-11-01 15:21:01.435466 | orchestrator | ae0a0669d56d registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_cadvisor 2025-11-01 15:21:01.435485 | orchestrator | 884a2c310ff8 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_memcached_exporter 2025-11-01 15:21:01.435504 | orchestrator | ce50bfdd264e registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_mysqld_exporter 2025-11-01 15:21:01.435522 | orchestrator | 5b7173e50fb5 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_node_exporter 2025-11-01 15:21:01.435541 | orchestrator | c0fdd24728d6 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) magnum_conductor 2025-11-01 15:21:01.435560 | orchestrator | fb2d1707f521 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) magnum_api 2025-11-01 15:21:01.435599 | orchestrator | b27c82909daa registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server 2025-11-01 15:21:01.435619 | orchestrator | 9f6f57a9c744 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api 2025-11-01 15:21:01.435636 | orchestrator | 76881dc105f1 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_worker 2025-11-01 15:21:01.435652 | orchestrator | a6270f32846d registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_mdns 2025-11-01 15:21:01.435675 | orchestrator | 2967216d0c6e registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_producer 2025-11-01 15:21:01.435692 | orchestrator | d95fdace3e6a registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_central 2025-11-01 15:21:01.435713 | orchestrator | 305cd2647805 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_api 2025-11-01 15:21:01.435730 | orchestrator | 15b060788a45 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_backend_bind9 2025-11-01 15:21:01.435756 | orchestrator | 88046b88600d registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) barbican_worker 2025-11-01 15:21:01.435773 | orchestrator | 1c0369cb2375 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) barbican_keystone_listener 2025-11-01 15:21:01.435789 | orchestrator | 90b090c49249 
registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) barbican_api 2025-11-01 15:21:01.435806 | orchestrator | 72f2c21abb6a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" About an hour ago Up About an hour ceph-mgr-testbed-node-0 2025-11-01 15:21:01.435823 | orchestrator | 800aa849fc19 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone 2025-11-01 15:21:01.435839 | orchestrator | fddcf3cf7e3a registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet 2025-11-01 15:21:01.435856 | orchestrator | b5662fb12fe8 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh 2025-11-01 15:21:01.435873 | orchestrator | f6d27d021443 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) horizon 2025-11-01 15:21:01.435890 | orchestrator | 03f5a12a4051 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2025-11-01 15:21:01.435906 | orchestrator | da168791cca8 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2025-11-01 15:21:01.435922 | orchestrator | e0ecd60129a9 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2025-11-01 15:21:01.435939 | orchestrator | fef1b2c71562 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0 2025-11-01 15:21:01.435955 | orchestrator | f62e2fafb891 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2025-11-01 15:21:01.435972 | orchestrator | cb6d9ff1ea0a registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2025-11-01 15:21:01.435997 | orchestrator | fb020d589157 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2025-11-01 15:21:01.436014 | orchestrator | 3f3e8c0f3332 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2025-11-01 15:21:01.436031 | orchestrator | 88fb55aa2684 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2025-11-01 15:21:01.436047 | orchestrator | 88507f42ab1f registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2025-11-01 15:21:01.436064 | orchestrator | de7733108cd5 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0 2025-11-01 15:21:01.436090 | orchestrator | 7eea03dc1047 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2025-11-01 15:21:01.436113 | orchestrator | 4a46c0f3101f registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2025-11-01 15:21:01.436130 | orchestrator | a89062b9fbc8 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 
2025-11-01 15:21:01.436147 | orchestrator | 754eb9c5748a registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2025-11-01 15:21:01.436163 | orchestrator | bd6a228ec163 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2025-11-01 15:21:01.436180 | orchestrator | ed1482d525a3 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2025-11-01 15:21:01.436197 | orchestrator | 1022423ad324 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2025-11-01 15:21:01.436213 | orchestrator | 36f41fdc3e20 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" About an hour ago Up About an hour cron 2025-11-01 15:21:01.436229 | orchestrator | c746e2f054ae registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" About an hour ago Up About an hour kolla_toolbox 2025-11-01 15:21:01.436246 | orchestrator | 37980dfa5f0d registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" About an hour ago Up About an hour fluentd 2025-11-01 15:21:01.763874 | orchestrator | 2025-11-01 15:21:01.763954 | orchestrator | ## Images @ testbed-node-0 2025-11-01 15:21:01.763970 | orchestrator | 2025-11-01 15:21:01.763982 | orchestrator | + echo 2025-11-01 15:21:01.763993 | orchestrator | + echo '## Images @ testbed-node-0' 2025-11-01 15:21:01.764005 | orchestrator | + echo 2025-11-01 15:21:01.764016 | orchestrator | + osism container testbed-node-0 images 2025-11-01 15:21:04.282826 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-11-01 15:21:04.282926 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 44f898f2e9b3 12 hours ago 1.27GB 2025-11-01 15:21:04.282942 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a500281cdb12 13 hours ago 394MB 2025-11-01 15:21:04.282953 | orchestrator | registry.osism.tech/kolla/cron 2024.2 eaa73375e046 13 hours ago 267MB 2025-11-01 15:21:04.282964 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3370d7cde2dc 13 hours ago 1GB 2025-11-01 15:21:04.282975 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ce8a1ccf9781 13 hours ago 580MB 2025-11-01 15:21:04.282986 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 f2eec3862497 13 hours ago 278MB 2025-11-01 15:21:04.282997 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 3d25be654b17 13 hours ago 275MB 2025-11-01 15:21:04.283008 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 c73bff14dba3 13 hours ago 324MB 2025-11-01 15:21:04.283039 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 308dc86a2ead 13 hours ago 1.54GB 2025-11-01 15:21:04.283051 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 909a2a43a28b 13 hours ago 1.51GB 2025-11-01 15:21:04.283062 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d0c82a0ec65c 13 hours ago 671MB 2025-11-01 15:21:04.283097 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 85001988322c 13 hours ago 267MB 2025-11-01 15:21:04.283109 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 155e1f38ae11 13 hours ago 449MB 2025-11-01 15:21:04.283120 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 eb0992b53bde 13 hours ago 293MB 2025-11-01 15:21:04.283131 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d674d1069289 13 
hours ago 307MB 2025-11-01 15:21:04.283142 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 01facc12637c 13 hours ago 302MB 2025-11-01 15:21:04.283153 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1478fa905298 13 hours ago 358MB 2025-11-01 15:21:04.284090 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 b5ab78f4ac8f 13 hours ago 300MB 2025-11-01 15:21:04.284111 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 0e66e32d606a 13 hours ago 1.15GB 2025-11-01 15:21:04.284122 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d5aa73840e10 13 hours ago 274MB 2025-11-01 15:21:04.284133 | orchestrator | registry.osism.tech/kolla/redis 2024.2 c4b13aebd387 13 hours ago 274MB 2025-11-01 15:21:04.284144 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 0fc4ae24ea0e 13 hours ago 280MB 2025-11-01 15:21:04.284155 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 aa9fcdd9c97e 13 hours ago 280MB 2025-11-01 15:21:04.284166 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 765dfa6912f5 13 hours ago 977MB 2025-11-01 15:21:04.284177 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 57373846f77c 13 hours ago 990MB 2025-11-01 15:21:04.284188 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 6ac3d58ce359 13 hours ago 986MB 2025-11-01 15:21:04.284198 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 f11ee4a81645 13 hours ago 985MB 2025-11-01 15:21:04.284209 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 a262eee2014d 13 hours ago 986MB 2025-11-01 15:21:04.284220 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 b21b61bdd6f3 13 hours ago 986MB 2025-11-01 15:21:04.284231 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 b387cac5b723 13 hours ago 990MB 2025-11-01 15:21:04.284242 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 fc49ef6499ee 13 hours ago 1.1GB 2025-11-01 15:21:04.284253 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c7ad4d47bf86 13 hours ago 992MB 2025-11-01 15:21:04.284263 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 99f2c51e3b82 13 hours ago 991MB 2025-11-01 15:21:04.284274 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 0ea7bd5c41df 13 hours ago 992MB 2025-11-01 15:21:04.284335 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 c544fb2ad50e 13 hours ago 1.16GB 2025-11-01 15:21:04.284346 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 3aed99d10b53 13 hours ago 1.4GB 2025-11-01 15:21:04.284358 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 989948c1427d 13 hours ago 1.4GB 2025-11-01 15:21:04.284369 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 dff5bfd72644 13 hours ago 975MB 2025-11-01 15:21:04.284380 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 0ffe873f9568 13 hours ago 975MB 2025-11-01 15:21:04.284390 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 3b0fa753bca7 13 hours ago 975MB 2025-11-01 15:21:04.284401 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 ffc5292b586f 13 hours ago 974MB 2025-11-01 15:21:04.284423 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 508223a03f2a 13 hours ago 1.13GB 2025-11-01 15:21:04.284434 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 38bab00d6975 13 hours ago 1.24GB 2025-11-01 
15:21:04.284445 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 bae5a17370e4 13 hours ago 1.04GB 2025-11-01 15:21:04.284456 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f333080e0f8b 13 hours ago 1.09GB 2025-11-01 15:21:04.284467 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 b9d44a046aa8 13 hours ago 1.04GB 2025-11-01 15:21:04.284478 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 6e37d25838f3 13 hours ago 978MB 2025-11-01 15:21:04.284489 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 820bdc545666 13 hours ago 977MB 2025-11-01 15:21:04.284499 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 ee77aaf10e87 13 hours ago 1.05GB 2025-11-01 15:21:04.284510 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 a160aa4e4243 13 hours ago 991MB 2025-11-01 15:21:04.284521 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 ad98f2eeffe7 13 hours ago 1.05GB 2025-11-01 15:21:04.284532 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 c3d1f1672d13 13 hours ago 1.03GB 2025-11-01 15:21:04.284543 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 178f20b77a9f 13 hours ago 1.05GB 2025-11-01 15:21:04.284554 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 82f8f3aaed83 13 hours ago 1.03GB 2025-11-01 15:21:04.284565 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 4131f85c01c8 13 hours ago 1.03GB 2025-11-01 15:21:04.284586 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 bd151c181d88 13 hours ago 1.21GB 2025-11-01 15:21:04.284597 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b1a4ecd753b1 13 hours ago 1.21GB 2025-11-01 15:21:04.284608 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 95866e00ca40 13 hours ago 1.37GB 2025-11-01 15:21:04.284619 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 39fb338b4a2c 13 hours ago 1.21GB 2025-11-01 15:21:04.284630 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 421bf1ebb80e 13 hours ago 841MB 2025-11-01 15:21:04.284641 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 30d4f67a1924 13 hours ago 841MB 2025-11-01 15:21:04.284652 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 822cd9e7eadc 13 hours ago 841MB 2025-11-01 15:21:04.284663 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 7c4750c0829d 13 hours ago 841MB 2025-11-01 15:21:04.623405 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-11-01 15:21:04.623834 | orchestrator | ++ semver latest 5.0.0 2025-11-01 15:21:04.689346 | orchestrator | 2025-11-01 15:21:04.689388 | orchestrator | ## Containers @ testbed-node-1 2025-11-01 15:21:04.689400 | orchestrator | 2025-11-01 15:21:04.689412 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-01 15:21:04.689423 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 15:21:04.689434 | orchestrator | + echo 2025-11-01 15:21:04.689445 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-11-01 15:21:04.689456 | orchestrator | + echo 2025-11-01 15:21:04.689467 | orchestrator | + osism container testbed-node-1 ps 2025-11-01 15:21:07.609150 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-11-01 15:21:07.609270 | orchestrator | 64aed748a1b8 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) octavia_worker 2025-11-01 
15:21:07.609358 | orchestrator | 2d94fd7def2a registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) octavia_housekeeping 2025-11-01 15:21:07.609371 | orchestrator | 767baf80e5ec registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) octavia_health_manager 2025-11-01 15:21:07.609381 | orchestrator | ac7363dd9ff6 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 46 minutes ago Up 46 minutes octavia_driver_agent 2025-11-01 15:21:07.609392 | orchestrator | dc9379d930cb registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 47 minutes ago Up 47 minutes (healthy) octavia_api 2025-11-01 15:21:07.609403 | orchestrator | 06cdc6f5a455 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 50 minutes ago Up 49 minutes (healthy) nova_novncproxy 2025-11-01 15:21:07.609414 | orchestrator | 18013179689e registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) nova_conductor 2025-11-01 15:21:07.609424 | orchestrator | 1fadf31dfa9c registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) nova_api 2025-11-01 15:21:07.609435 | orchestrator | 4380b5d151fb registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 52 minutes ago Up 51 minutes grafana 2025-11-01 15:21:07.609446 | orchestrator | be14164f088e registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 52 minutes ago Up 51 minutes (healthy) nova_scheduler 2025-11-01 15:21:07.609457 | orchestrator | 05b8ca6bd70d registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) glance_api 2025-11-01 15:21:07.609468 | orchestrator | fb3befd00273 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) cinder_scheduler 2025-11-01 15:21:07.609478 | orchestrator | 469ea551ab45 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) cinder_api 2025-11-01 15:21:07.609494 | orchestrator | 4d063e540c28 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 56 minutes ago Up 56 minutes prometheus_elasticsearch_exporter 2025-11-01 15:21:07.609506 | orchestrator | 1c3aa45861cf registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_cadvisor 2025-11-01 15:21:07.609517 | orchestrator | a5bcd1b0351a registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_memcached_exporter 2025-11-01 15:21:07.609529 | orchestrator | a9d7661a847a registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_mysqld_exporter 2025-11-01 15:21:07.609539 | orchestrator | e6a1ca92fb0f registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_node_exporter 2025-11-01 15:21:07.609551 | orchestrator | 42c861304c32 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) magnum_conductor 2025-11-01 15:21:07.609562 | orchestrator | 2ca76207d436 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) magnum_api 2025-11-01 15:21:07.609602 | orchestrator | b0c8c6661074 
registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server 2025-11-01 15:21:07.609615 | orchestrator | 42d7a63719f6 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api 2025-11-01 15:21:07.609626 | orchestrator | 15e6607c4a4f registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_worker 2025-11-01 15:21:07.609637 | orchestrator | efda197a964d registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_mdns 2025-11-01 15:21:07.609648 | orchestrator | 7d2875857812 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_producer 2025-11-01 15:21:07.609659 | orchestrator | 735bbd6fda34 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_central 2025-11-01 15:21:07.609669 | orchestrator | d7a1735365b7 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_api 2025-11-01 15:21:07.609680 | orchestrator | 812c379b1d38 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_backend_bind9 2025-11-01 15:21:07.609693 | orchestrator | 9f96b1fcf654 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) barbican_worker 2025-11-01 15:21:07.609706 | orchestrator | 7350c7cfbb02 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) barbican_keystone_listener 2025-11-01 15:21:07.609718 | orchestrator | 5b5ebf693346 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) barbican_api 2025-11-01 15:21:07.609730 | orchestrator | 9ff265f98a73 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" About an hour ago Up About an hour ceph-mgr-testbed-node-1 2025-11-01 15:21:07.609743 | orchestrator | 4913f0ab9a95 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone 2025-11-01 15:21:07.609755 | orchestrator | 5e247f735f00 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet 2025-11-01 15:21:07.609768 | orchestrator | 877169a6dabd registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) horizon 2025-11-01 15:21:07.609780 | orchestrator | 98d4a4bb79f1 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh 2025-11-01 15:21:07.609793 | orchestrator | 1ecb54ac27c2 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2025-11-01 15:21:07.609805 | orchestrator | 9d9954f97391 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2025-11-01 15:21:07.609818 | orchestrator | cf9b4bad8314 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2025-11-01 
15:21:07.609837 | orchestrator | d766601a504b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2025-11-01 15:21:07.609849 | orchestrator | a28fe066dc50 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2025-11-01 15:21:07.609862 | orchestrator | 8181fd3b55b4 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2025-11-01 15:21:07.609880 | orchestrator | 4b7d27b1c2b9 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2025-11-01 15:21:07.609898 | orchestrator | 3bbbf8909479 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2025-11-01 15:21:07.609916 | orchestrator | 6cde033e0379 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2025-11-01 15:21:07.609929 | orchestrator | ad95d5b9d6e0 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2025-11-01 15:21:07.609941 | orchestrator | 5537e857cfc2 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2025-11-01 15:21:07.609953 | orchestrator | 46618e209c6e registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2025-11-01 15:21:07.609966 | orchestrator | cc092727f209 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2025-11-01 15:21:07.609978 | orchestrator | a0ad8381f3c9 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2025-11-01 15:21:07.609990 | orchestrator | d08c605f479e registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2025-11-01 15:21:07.610003 | orchestrator | 026d581c5815 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2025-11-01 15:21:07.610076 | orchestrator | 7982c32cc934 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2025-11-01 15:21:07.610089 | orchestrator | b6d0c4b1d906 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2025-11-01 15:21:07.610100 | orchestrator | e3b94a7081fb registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" About an hour ago Up About an hour cron 2025-11-01 15:21:07.610112 | orchestrator | 4e40bf99b8a8 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" About an hour ago Up About an hour kolla_toolbox 2025-11-01 15:21:07.610123 | orchestrator | 4089fedee730 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" About an hour ago Up About an hour fluentd 2025-11-01 15:21:07.945583 | orchestrator | 2025-11-01 15:21:07.945634 | orchestrator | ## Images @ testbed-node-1 2025-11-01 15:21:07.945647 | orchestrator | 2025-11-01 15:21:07.945658 | orchestrator | + echo 2025-11-01 15:21:07.945669 | orchestrator | + echo '## Images @ testbed-node-1' 2025-11-01 15:21:07.945682 | orchestrator | + echo 2025-11-01 15:21:07.945693 | 
orchestrator | + osism container testbed-node-1 images 2025-11-01 15:21:10.420544 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-11-01 15:21:10.420643 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 44f898f2e9b3 12 hours ago 1.27GB 2025-11-01 15:21:10.420657 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a500281cdb12 13 hours ago 394MB 2025-11-01 15:21:10.420668 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3370d7cde2dc 13 hours ago 1GB 2025-11-01 15:21:10.420679 | orchestrator | registry.osism.tech/kolla/cron 2024.2 eaa73375e046 13 hours ago 267MB 2025-11-01 15:21:10.420690 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ce8a1ccf9781 13 hours ago 580MB 2025-11-01 15:21:10.420701 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 f2eec3862497 13 hours ago 278MB 2025-11-01 15:21:10.420712 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 3d25be654b17 13 hours ago 275MB 2025-11-01 15:21:10.420723 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 c73bff14dba3 13 hours ago 324MB 2025-11-01 15:21:10.420734 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 909a2a43a28b 13 hours ago 1.51GB 2025-11-01 15:21:10.420745 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 308dc86a2ead 13 hours ago 1.54GB 2025-11-01 15:21:10.420755 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d0c82a0ec65c 13 hours ago 671MB 2025-11-01 15:21:10.420766 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 85001988322c 13 hours ago 267MB 2025-11-01 15:21:10.420777 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 155e1f38ae11 13 hours ago 449MB 2025-11-01 15:21:10.420787 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 eb0992b53bde 13 hours ago 293MB 2025-11-01 15:21:10.420798 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d674d1069289 13 hours ago 307MB 2025-11-01 15:21:10.420809 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 01facc12637c 13 hours ago 302MB 2025-11-01 15:21:10.420820 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1478fa905298 13 hours ago 358MB 2025-11-01 15:21:10.420831 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 b5ab78f4ac8f 13 hours ago 300MB 2025-11-01 15:21:10.420841 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 0e66e32d606a 13 hours ago 1.15GB 2025-11-01 15:21:10.420871 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d5aa73840e10 13 hours ago 274MB 2025-11-01 15:21:10.420882 | orchestrator | registry.osism.tech/kolla/redis 2024.2 c4b13aebd387 13 hours ago 274MB 2025-11-01 15:21:10.420893 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 0fc4ae24ea0e 13 hours ago 280MB 2025-11-01 15:21:10.420904 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 aa9fcdd9c97e 13 hours ago 280MB 2025-11-01 15:21:10.420914 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 765dfa6912f5 13 hours ago 977MB 2025-11-01 15:21:10.420925 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 57373846f77c 13 hours ago 990MB 2025-11-01 15:21:10.420936 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 6ac3d58ce359 13 hours ago 986MB 2025-11-01 15:21:10.420946 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 f11ee4a81645 13 hours ago 985MB 2025-11-01 15:21:10.420957 | orchestrator | 
registry.osism.tech/kolla/designate-api 2024.2 a262eee2014d 13 hours ago 986MB 2025-11-01 15:21:10.420968 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 b21b61bdd6f3 13 hours ago 986MB 2025-11-01 15:21:10.421001 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 b387cac5b723 13 hours ago 990MB 2025-11-01 15:21:10.421012 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 fc49ef6499ee 13 hours ago 1.1GB 2025-11-01 15:21:10.421023 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c7ad4d47bf86 13 hours ago 992MB 2025-11-01 15:21:10.421034 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 99f2c51e3b82 13 hours ago 991MB 2025-11-01 15:21:10.421044 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 0ea7bd5c41df 13 hours ago 992MB 2025-11-01 15:21:10.421055 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 c544fb2ad50e 13 hours ago 1.16GB 2025-11-01 15:21:10.421066 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 3aed99d10b53 13 hours ago 1.4GB 2025-11-01 15:21:10.421100 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 989948c1427d 13 hours ago 1.4GB 2025-11-01 15:21:10.421115 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 508223a03f2a 13 hours ago 1.13GB 2025-11-01 15:21:10.421127 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 38bab00d6975 13 hours ago 1.24GB 2025-11-01 15:21:10.421140 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 bae5a17370e4 13 hours ago 1.04GB 2025-11-01 15:21:10.421152 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f333080e0f8b 13 hours ago 1.09GB 2025-11-01 15:21:10.421164 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 b9d44a046aa8 13 hours ago 1.04GB 2025-11-01 15:21:10.421177 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 ad98f2eeffe7 13 hours ago 1.05GB 2025-11-01 15:21:10.421188 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 c3d1f1672d13 13 hours ago 1.03GB 2025-11-01 15:21:10.421200 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 178f20b77a9f 13 hours ago 1.05GB 2025-11-01 15:21:10.421212 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 82f8f3aaed83 13 hours ago 1.03GB 2025-11-01 15:21:10.421224 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 4131f85c01c8 13 hours ago 1.03GB 2025-11-01 15:21:10.421236 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 bd151c181d88 13 hours ago 1.21GB 2025-11-01 15:21:10.421249 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b1a4ecd753b1 13 hours ago 1.21GB 2025-11-01 15:21:10.421261 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 95866e00ca40 13 hours ago 1.37GB 2025-11-01 15:21:10.421274 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 39fb338b4a2c 13 hours ago 1.21GB 2025-11-01 15:21:10.421312 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 421bf1ebb80e 13 hours ago 841MB 2025-11-01 15:21:10.421325 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 30d4f67a1924 13 hours ago 841MB 2025-11-01 15:21:10.421337 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 822cd9e7eadc 13 hours ago 841MB 2025-11-01 15:21:10.421350 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 7c4750c0829d 13 hours ago 841MB 2025-11-01 15:21:10.795971 | orchestrator | + for node in testbed-manager testbed-node-0 
testbed-node-1 testbed-node-2 2025-11-01 15:21:10.796219 | orchestrator | ++ semver latest 5.0.0 2025-11-01 15:21:10.858678 | orchestrator | 2025-11-01 15:21:10.858707 | orchestrator | ## Containers @ testbed-node-2 2025-11-01 15:21:10.858719 | orchestrator | 2025-11-01 15:21:10.858730 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-01 15:21:10.858741 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 15:21:10.858751 | orchestrator | + echo 2025-11-01 15:21:10.858763 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-11-01 15:21:10.858797 | orchestrator | + echo 2025-11-01 15:21:10.858808 | orchestrator | + osism container testbed-node-2 ps 2025-11-01 15:21:13.672504 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-11-01 15:21:13.672592 | orchestrator | d149776cd609 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) octavia_worker 2025-11-01 15:21:13.672606 | orchestrator | 65c5923cdb6c registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) octavia_housekeeping 2025-11-01 15:21:13.672618 | orchestrator | f0c53e964813 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) octavia_health_manager 2025-11-01 15:21:13.672629 | orchestrator | 77b77b9f9b6f registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 46 minutes ago Up 46 minutes octavia_driver_agent 2025-11-01 15:21:13.672640 | orchestrator | f2f714089afb registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 47 minutes ago Up 47 minutes (healthy) octavia_api 2025-11-01 15:21:13.672670 | orchestrator | 7ac776d54c4d registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) nova_novncproxy 2025-11-01 15:21:13.672681 | orchestrator | fef9e4f383a9 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) nova_conductor 2025-11-01 15:21:13.672692 | orchestrator | 09053fbc727a registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 52 minutes ago Up 51 minutes (healthy) nova_api 2025-11-01 15:21:13.672703 | orchestrator | cf69effd4511 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) nova_scheduler 2025-11-01 15:21:13.672714 | orchestrator | 635b8128982a registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 52 minutes ago Up 52 minutes grafana 2025-11-01 15:21:13.672725 | orchestrator | 211f41cf6b7c registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) glance_api 2025-11-01 15:21:13.672736 | orchestrator | dfc9e7c30dac registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) cinder_scheduler 2025-11-01 15:21:13.672746 | orchestrator | 13b27f2ecc91 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) cinder_api 2025-11-01 15:21:13.672757 | orchestrator | e127e92cb4ff registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_elasticsearch_exporter 2025-11-01 15:21:13.672769 | orchestrator | ab421f83ba12 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_cadvisor 2025-11-01 15:21:13.672780 | orchestrator | 43a337bca971 
registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_memcached_exporter 2025-11-01 15:21:13.672791 | orchestrator | d9e64afaa5b1 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_mysqld_exporter 2025-11-01 15:21:13.672802 | orchestrator | 7e8b5e30bd7d registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 57 minutes ago Up 57 minutes prometheus_node_exporter 2025-11-01 15:21:13.672834 | orchestrator | 4b4861b6d682 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) magnum_conductor 2025-11-01 15:21:13.672845 | orchestrator | 6d3a4518f956 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) magnum_api 2025-11-01 15:21:13.672872 | orchestrator | b2a1a7b40497 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server 2025-11-01 15:21:13.672883 | orchestrator | d6a5862c9d13 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api 2025-11-01 15:21:13.672895 | orchestrator | d9caa86773b7 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_worker 2025-11-01 15:21:13.672906 | orchestrator | 579fa64c0e03 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_mdns 2025-11-01 15:21:13.672917 | orchestrator | fc5a4de5ae9c registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_producer 2025-11-01 15:21:13.672927 | orchestrator | 5b04fa8cfd98 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_central 2025-11-01 15:21:13.672938 | orchestrator | 74a6b84c377a registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_api 2025-11-01 15:21:13.672949 | orchestrator | b47b45594db0 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) designate_backend_bind9 2025-11-01 15:21:13.672960 | orchestrator | 941457aac174 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) barbican_worker 2025-11-01 15:21:13.672970 | orchestrator | 2b133bceac84 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) barbican_keystone_listener 2025-11-01 15:21:13.672981 | orchestrator | 893ed9c0bc06 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) barbican_api 2025-11-01 15:21:13.672992 | orchestrator | 89c09d2770b6 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" About an hour ago Up About an hour ceph-mgr-testbed-node-2 2025-11-01 15:21:13.673003 | orchestrator | 5e5b1093cd9f registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone 2025-11-01 15:21:13.673013 | orchestrator | db58906e531b registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) 
keystone_fernet 2025-11-01 15:21:13.673024 | orchestrator | 48831fe8dc64 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) horizon 2025-11-01 15:21:13.673035 | orchestrator | 9b36912d07fe registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh 2025-11-01 15:21:13.673046 | orchestrator | a791a9299b6e registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2025-11-01 15:21:13.673066 | orchestrator | 6ba7aac37440 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2025-11-01 15:21:13.673092 | orchestrator | 979c03dba4c2 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2025-11-01 15:21:13.673109 | orchestrator | 3898bef726fa registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2025-11-01 15:21:13.673121 | orchestrator | 313c0ab2c2df registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2025-11-01 15:21:13.673133 | orchestrator | 1dd1b294da7c registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2025-11-01 15:21:13.673151 | orchestrator | 72bcba7ad277 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2025-11-01 15:21:13.673163 | orchestrator | 9682e7c84640 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2025-11-01 15:21:13.673175 | orchestrator | d4bff196c611 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2025-11-01 15:21:13.673188 | orchestrator | 2184c3cdad78 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2025-11-01 15:21:13.673199 | orchestrator | a014f15841be registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2025-11-01 15:21:13.673211 | orchestrator | 09d40f0fdeb9 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2025-11-01 15:21:13.673223 | orchestrator | 896120e1af93 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2025-11-01 15:21:13.673236 | orchestrator | 0f8cd901dadc registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2025-11-01 15:21:13.673248 | orchestrator | 015f837b411f registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2025-11-01 15:21:13.673260 | orchestrator | e13430165a34 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2025-11-01 15:21:13.673273 | orchestrator | 9da379f62046 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2025-11-01 15:21:13.673307 | orchestrator | b53b4ffba5db registry.osism.tech/kolla/memcached:2024.2 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2025-11-01 15:21:13.673321 | orchestrator | 09224c3d2620 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" About an hour ago Up About an hour cron 2025-11-01 15:21:13.673333 | orchestrator | 45d8f6506f0a registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" About an hour ago Up About an hour kolla_toolbox 2025-11-01 15:21:13.673352 | orchestrator | 5c480d06f162 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" About an hour ago Up About an hour fluentd 2025-11-01 15:21:14.005013 | orchestrator | 2025-11-01 15:21:14.005063 | orchestrator | ## Images @ testbed-node-2 2025-11-01 15:21:14.005076 | orchestrator | 2025-11-01 15:21:14.005088 | orchestrator | + echo 2025-11-01 15:21:14.005099 | orchestrator | + echo '## Images @ testbed-node-2' 2025-11-01 15:21:14.005112 | orchestrator | + echo 2025-11-01 15:21:14.005124 | orchestrator | + osism container testbed-node-2 images 2025-11-01 15:21:16.553339 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-11-01 15:21:16.553437 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 44f898f2e9b3 12 hours ago 1.27GB 2025-11-01 15:21:16.553451 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a500281cdb12 13 hours ago 394MB 2025-11-01 15:21:16.553463 | orchestrator | registry.osism.tech/kolla/cron 2024.2 eaa73375e046 13 hours ago 267MB 2025-11-01 15:21:16.553474 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3370d7cde2dc 13 hours ago 1GB 2025-11-01 15:21:16.553485 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ce8a1ccf9781 13 hours ago 580MB 2025-11-01 15:21:16.553495 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 3d25be654b17 13 hours ago 275MB 2025-11-01 15:21:16.553506 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 f2eec3862497 13 hours ago 278MB 2025-11-01 15:21:16.553517 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 c73bff14dba3 13 hours ago 324MB 2025-11-01 15:21:16.553527 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 909a2a43a28b 13 hours ago 1.51GB 2025-11-01 15:21:16.553538 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 308dc86a2ead 13 hours ago 1.54GB 2025-11-01 15:21:16.553549 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d0c82a0ec65c 13 hours ago 671MB 2025-11-01 15:21:16.553560 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 85001988322c 13 hours ago 267MB 2025-11-01 15:21:16.553570 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 155e1f38ae11 13 hours ago 449MB 2025-11-01 15:21:16.553581 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 eb0992b53bde 13 hours ago 293MB 2025-11-01 15:21:16.553592 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d674d1069289 13 hours ago 307MB 2025-11-01 15:21:16.553602 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 01facc12637c 13 hours ago 302MB 2025-11-01 15:21:16.553613 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1478fa905298 13 hours ago 358MB 2025-11-01 15:21:16.553624 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 b5ab78f4ac8f 13 hours ago 300MB 2025-11-01 15:21:16.553635 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 0e66e32d606a 13 hours ago 1.15GB 2025-11-01 15:21:16.553645 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d5aa73840e10 13 hours 
ago 274MB 2025-11-01 15:21:16.553656 | orchestrator | registry.osism.tech/kolla/redis 2024.2 c4b13aebd387 13 hours ago 274MB 2025-11-01 15:21:16.553666 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 0fc4ae24ea0e 13 hours ago 280MB 2025-11-01 15:21:16.553677 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 aa9fcdd9c97e 13 hours ago 280MB 2025-11-01 15:21:16.553688 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 765dfa6912f5 13 hours ago 977MB 2025-11-01 15:21:16.553698 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 57373846f77c 13 hours ago 990MB 2025-11-01 15:21:16.553731 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 6ac3d58ce359 13 hours ago 986MB 2025-11-01 15:21:16.553743 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 f11ee4a81645 13 hours ago 985MB 2025-11-01 15:21:16.553753 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 a262eee2014d 13 hours ago 986MB 2025-11-01 15:21:16.553764 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 b21b61bdd6f3 13 hours ago 986MB 2025-11-01 15:21:16.553775 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 b387cac5b723 13 hours ago 990MB 2025-11-01 15:21:16.553800 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 fc49ef6499ee 13 hours ago 1.1GB 2025-11-01 15:21:16.553811 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c7ad4d47bf86 13 hours ago 992MB 2025-11-01 15:21:16.553822 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 99f2c51e3b82 13 hours ago 991MB 2025-11-01 15:21:16.553833 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 0ea7bd5c41df 13 hours ago 992MB 2025-11-01 15:21:16.553843 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 c544fb2ad50e 13 hours ago 1.16GB 2025-11-01 15:21:16.553854 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 3aed99d10b53 13 hours ago 1.4GB 2025-11-01 15:21:16.553883 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 989948c1427d 13 hours ago 1.4GB 2025-11-01 15:21:16.553896 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 508223a03f2a 13 hours ago 1.13GB 2025-11-01 15:21:16.553908 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 38bab00d6975 13 hours ago 1.24GB 2025-11-01 15:21:16.553921 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 bae5a17370e4 13 hours ago 1.04GB 2025-11-01 15:21:16.553932 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f333080e0f8b 13 hours ago 1.09GB 2025-11-01 15:21:16.553945 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 b9d44a046aa8 13 hours ago 1.04GB 2025-11-01 15:21:16.553957 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 ad98f2eeffe7 13 hours ago 1.05GB 2025-11-01 15:21:16.553970 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 c3d1f1672d13 13 hours ago 1.03GB 2025-11-01 15:21:16.553987 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 178f20b77a9f 13 hours ago 1.05GB 2025-11-01 15:21:16.554000 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 82f8f3aaed83 13 hours ago 1.03GB 2025-11-01 15:21:16.554054 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 4131f85c01c8 13 hours ago 1.03GB 2025-11-01 15:21:16.554068 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 bd151c181d88 13 hours ago 1.21GB 2025-11-01 15:21:16.554081 
| orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b1a4ecd753b1 13 hours ago 1.21GB 2025-11-01 15:21:16.554093 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 95866e00ca40 13 hours ago 1.37GB 2025-11-01 15:21:16.554105 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 39fb338b4a2c 13 hours ago 1.21GB 2025-11-01 15:21:16.554117 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 421bf1ebb80e 13 hours ago 841MB 2025-11-01 15:21:16.554129 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 30d4f67a1924 13 hours ago 841MB 2025-11-01 15:21:16.554141 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 822cd9e7eadc 13 hours ago 841MB 2025-11-01 15:21:16.554153 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 7c4750c0829d 13 hours ago 841MB 2025-11-01 15:21:16.884792 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-11-01 15:21:16.892876 | orchestrator | + set -e 2025-11-01 15:21:16.893090 | orchestrator | + source /opt/manager-vars.sh 2025-11-01 15:21:16.894634 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-01 15:21:16.894652 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-01 15:21:16.894663 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-01 15:21:16.894674 | orchestrator | ++ CEPH_VERSION=reef 2025-11-01 15:21:16.894685 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-01 15:21:16.894697 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-01 15:21:16.894708 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-01 15:21:16.894719 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-01 15:21:16.894730 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-01 15:21:16.894740 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-01 15:21:16.894751 | orchestrator | ++ export ARA=false 2025-11-01 15:21:16.894762 | orchestrator | ++ ARA=false 2025-11-01 15:21:16.894772 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-01 15:21:16.894788 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-01 15:21:16.894799 | orchestrator | ++ export TEMPEST=false 2025-11-01 15:21:16.894810 | orchestrator | ++ TEMPEST=false 2025-11-01 15:21:16.894821 | orchestrator | ++ export IS_ZUUL=true 2025-11-01 15:21:16.894832 | orchestrator | ++ IS_ZUUL=true 2025-11-01 15:21:16.894843 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.208 2025-11-01 15:21:16.894854 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.208 2025-11-01 15:21:16.894864 | orchestrator | ++ export EXTERNAL_API=false 2025-11-01 15:21:16.894875 | orchestrator | ++ EXTERNAL_API=false 2025-11-01 15:21:16.894886 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-01 15:21:16.894896 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-01 15:21:16.894907 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-01 15:21:16.894918 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-01 15:21:16.894929 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-01 15:21:16.894939 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-01 15:21:16.894950 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-11-01 15:21:16.894961 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-11-01 15:21:16.902099 | orchestrator | + set -e 2025-11-01 15:21:16.902713 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-01 15:21:16.902731 | orchestrator | ++ export INTERACTIVE=false 2025-11-01 15:21:16.902742 | orchestrator | ++ INTERACTIVE=false 
2025-11-01 15:21:16.902753 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-01 15:21:16.902764 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-01 15:21:16.902775 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-11-01 15:21:16.903714 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-11-01 15:21:16.910206 | orchestrator | 2025-11-01 15:21:16.910226 | orchestrator | # Ceph status 2025-11-01 15:21:16.910237 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-01 15:21:16.910248 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-01 15:21:16.910259 | orchestrator | + echo 2025-11-01 15:21:16.910271 | orchestrator | + echo '# Ceph status' 2025-11-01 15:21:16.910313 | orchestrator | + echo 2025-11-01 15:21:16.910547 | orchestrator | 2025-11-01 15:21:16.910639 | orchestrator | + ceph -s 2025-11-01 15:21:17.493374 | orchestrator | cluster: 2025-11-01 15:21:17.493470 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-11-01 15:21:17.493486 | orchestrator | health: HEALTH_OK 2025-11-01 15:21:17.493498 | orchestrator | 2025-11-01 15:21:17.493509 | orchestrator | services: 2025-11-01 15:21:17.493520 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 75m) 2025-11-01 15:21:17.493545 | orchestrator | mgr: testbed-node-1(active, since 62m), standbys: testbed-node-2, testbed-node-0 2025-11-01 15:21:17.493558 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-11-01 15:21:17.493569 | orchestrator | osd: 6 osds: 6 up (since 71m), 6 in (since 72m) 2025-11-01 15:21:17.493580 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-11-01 15:21:17.493591 | orchestrator | 2025-11-01 15:21:17.493602 | orchestrator | data: 2025-11-01 15:21:17.493612 | orchestrator | volumes: 1/1 healthy 2025-11-01 15:21:17.493623 | orchestrator | pools: 14 pools, 401 pgs 2025-11-01 15:21:17.493634 | orchestrator | objects: 522 objects, 2.2 GiB 2025-11-01 15:21:17.493645 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-11-01 15:21:17.493656 | orchestrator | pgs: 401 active+clean 2025-11-01 15:21:17.493666 | orchestrator | 2025-11-01 15:21:17.541445 | orchestrator | 2025-11-01 15:21:17.541498 | orchestrator | # Ceph versions 2025-11-01 15:21:17.541510 | orchestrator | 2025-11-01 15:21:17.541522 | orchestrator | + echo 2025-11-01 15:21:17.541533 | orchestrator | + echo '# Ceph versions' 2025-11-01 15:21:17.541543 | orchestrator | + echo 2025-11-01 15:21:17.541554 | orchestrator | + ceph versions 2025-11-01 15:21:18.210836 | orchestrator | { 2025-11-01 15:21:18.210933 | orchestrator | "mon": { 2025-11-01 15:21:18.210949 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-11-01 15:21:18.210962 | orchestrator | }, 2025-11-01 15:21:18.210973 | orchestrator | "mgr": { 2025-11-01 15:21:18.210984 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-11-01 15:21:18.210995 | orchestrator | }, 2025-11-01 15:21:18.211005 | orchestrator | "osd": { 2025-11-01 15:21:18.211016 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-11-01 15:21:18.211027 | orchestrator | }, 2025-11-01 15:21:18.211038 | orchestrator | "mds": { 2025-11-01 15:21:18.211049 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-11-01 15:21:18.211059 | orchestrator | }, 2025-11-01 
15:21:18.211070 | orchestrator | "rgw": { 2025-11-01 15:21:18.211081 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-11-01 15:21:18.211092 | orchestrator | }, 2025-11-01 15:21:18.211102 | orchestrator | "overall": { 2025-11-01 15:21:18.211113 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-11-01 15:21:18.211124 | orchestrator | } 2025-11-01 15:21:18.211135 | orchestrator | } 2025-11-01 15:21:18.258398 | orchestrator | 2025-11-01 15:21:18.258437 | orchestrator | # Ceph OSD tree 2025-11-01 15:21:18.258456 | orchestrator | + echo 2025-11-01 15:21:18.258475 | orchestrator | + echo '# Ceph OSD tree' 2025-11-01 15:21:18.258493 | orchestrator | + echo 2025-11-01 15:21:18.258512 | orchestrator | 2025-11-01 15:21:18.258530 | orchestrator | + ceph osd df tree 2025-11-01 15:21:18.817575 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-11-01 15:21:18.817679 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 410 MiB 113 GiB 5.90 1.00 - root default 2025-11-01 15:21:18.817694 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 0.99 - host testbed-node-3 2025-11-01 15:21:18.817706 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 62 MiB 19 GiB 5.67 0.96 186 up osd.0 2025-11-01 15:21:18.817717 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.06 1.03 202 up osd.4 2025-11-01 15:21:18.817728 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-11-01 15:21:18.817738 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1003 MiB 1 KiB 78 MiB 19 GiB 5.28 0.90 192 up osd.1 2025-11-01 15:21:18.817749 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 66 MiB 19 GiB 6.55 1.11 200 up osd.5 2025-11-01 15:21:18.817760 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-11-01 15:21:18.817770 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1001 MiB 923 MiB 1 KiB 78 MiB 19 GiB 4.89 0.83 196 up osd.2 2025-11-01 15:21:18.817781 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 66 MiB 19 GiB 6.94 1.18 194 up osd.3 2025-11-01 15:21:18.817792 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 410 MiB 113 GiB 5.90 2025-11-01 15:21:18.817803 | orchestrator | MIN/MAX VAR: 0.83/1.18 STDDEV: 0.71 2025-11-01 15:21:18.862727 | orchestrator | 2025-11-01 15:21:18.862777 | orchestrator | # Ceph monitor status 2025-11-01 15:21:18.862789 | orchestrator | 2025-11-01 15:21:18.862800 | orchestrator | + echo 2025-11-01 15:21:18.862812 | orchestrator | + echo '# Ceph monitor status' 2025-11-01 15:21:18.862823 | orchestrator | + echo 2025-11-01 15:21:18.862834 | orchestrator | + ceph mon stat 2025-11-01 15:21:19.471888 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-11-01 15:21:19.517678 | orchestrator | 2025-11-01 15:21:19.517709 | orchestrator | # Ceph quorum status 2025-11-01 15:21:19.517720 | orchestrator | 2025-11-01 15:21:19.517730 | orchestrator | + echo 2025-11-01 15:21:19.517740 | orchestrator | + echo 
'# Ceph quorum status' 2025-11-01 15:21:19.517749 | orchestrator | + echo 2025-11-01 15:21:19.518630 | orchestrator | + ceph quorum_status 2025-11-01 15:21:19.518649 | orchestrator | + jq 2025-11-01 15:21:20.138407 | orchestrator | { 2025-11-01 15:21:20.138485 | orchestrator | "election_epoch": 8, 2025-11-01 15:21:20.138519 | orchestrator | "quorum": [ 2025-11-01 15:21:20.138533 | orchestrator | 0, 2025-11-01 15:21:20.138544 | orchestrator | 1, 2025-11-01 15:21:20.138555 | orchestrator | 2 2025-11-01 15:21:20.138566 | orchestrator | ], 2025-11-01 15:21:20.138577 | orchestrator | "quorum_names": [ 2025-11-01 15:21:20.138588 | orchestrator | "testbed-node-0", 2025-11-01 15:21:20.138598 | orchestrator | "testbed-node-1", 2025-11-01 15:21:20.138609 | orchestrator | "testbed-node-2" 2025-11-01 15:21:20.138620 | orchestrator | ], 2025-11-01 15:21:20.138631 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-11-01 15:21:20.138644 | orchestrator | "quorum_age": 4513, 2025-11-01 15:21:20.138654 | orchestrator | "features": { 2025-11-01 15:21:20.138665 | orchestrator | "quorum_con": "4540138322906710015", 2025-11-01 15:21:20.138676 | orchestrator | "quorum_mon": [ 2025-11-01 15:21:20.138687 | orchestrator | "kraken", 2025-11-01 15:21:20.138698 | orchestrator | "luminous", 2025-11-01 15:21:20.138709 | orchestrator | "mimic", 2025-11-01 15:21:20.138719 | orchestrator | "osdmap-prune", 2025-11-01 15:21:20.138730 | orchestrator | "nautilus", 2025-11-01 15:21:20.138741 | orchestrator | "octopus", 2025-11-01 15:21:20.138752 | orchestrator | "pacific", 2025-11-01 15:21:20.138763 | orchestrator | "elector-pinging", 2025-11-01 15:21:20.138950 | orchestrator | "quincy", 2025-11-01 15:21:20.138963 | orchestrator | "reef" 2025-11-01 15:21:20.138974 | orchestrator | ] 2025-11-01 15:21:20.138985 | orchestrator | }, 2025-11-01 15:21:20.138996 | orchestrator | "monmap": { 2025-11-01 15:21:20.139007 | orchestrator | "epoch": 1, 2025-11-01 15:21:20.139017 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-11-01 15:21:20.139029 | orchestrator | "modified": "2025-11-01T14:05:46.765355Z", 2025-11-01 15:21:20.139039 | orchestrator | "created": "2025-11-01T14:05:46.765355Z", 2025-11-01 15:21:20.139050 | orchestrator | "min_mon_release": 18, 2025-11-01 15:21:20.139061 | orchestrator | "min_mon_release_name": "reef", 2025-11-01 15:21:20.139072 | orchestrator | "election_strategy": 1, 2025-11-01 15:21:20.139083 | orchestrator | "disallowed_leaders: ": "", 2025-11-01 15:21:20.139094 | orchestrator | "stretch_mode": false, 2025-11-01 15:21:20.139104 | orchestrator | "tiebreaker_mon": "", 2025-11-01 15:21:20.139115 | orchestrator | "removed_ranks: ": "", 2025-11-01 15:21:20.139125 | orchestrator | "features": { 2025-11-01 15:21:20.139136 | orchestrator | "persistent": [ 2025-11-01 15:21:20.139146 | orchestrator | "kraken", 2025-11-01 15:21:20.139157 | orchestrator | "luminous", 2025-11-01 15:21:20.139167 | orchestrator | "mimic", 2025-11-01 15:21:20.139178 | orchestrator | "osdmap-prune", 2025-11-01 15:21:20.139189 | orchestrator | "nautilus", 2025-11-01 15:21:20.139199 | orchestrator | "octopus", 2025-11-01 15:21:20.139210 | orchestrator | "pacific", 2025-11-01 15:21:20.139220 | orchestrator | "elector-pinging", 2025-11-01 15:21:20.139231 | orchestrator | "quincy", 2025-11-01 15:21:20.139241 | orchestrator | "reef" 2025-11-01 15:21:20.139252 | orchestrator | ], 2025-11-01 15:21:20.139263 | orchestrator | "optional": [] 2025-11-01 15:21:20.139274 | orchestrator | }, 2025-11-01 15:21:20.139304 | 
orchestrator | "mons": [ 2025-11-01 15:21:20.139315 | orchestrator | { 2025-11-01 15:21:20.139326 | orchestrator | "rank": 0, 2025-11-01 15:21:20.139337 | orchestrator | "name": "testbed-node-0", 2025-11-01 15:21:20.139347 | orchestrator | "public_addrs": { 2025-11-01 15:21:20.139358 | orchestrator | "addrvec": [ 2025-11-01 15:21:20.139368 | orchestrator | { 2025-11-01 15:21:20.139378 | orchestrator | "type": "v2", 2025-11-01 15:21:20.139389 | orchestrator | "addr": "192.168.16.10:3300", 2025-11-01 15:21:20.139399 | orchestrator | "nonce": 0 2025-11-01 15:21:20.139410 | orchestrator | }, 2025-11-01 15:21:20.139420 | orchestrator | { 2025-11-01 15:21:20.139431 | orchestrator | "type": "v1", 2025-11-01 15:21:20.139462 | orchestrator | "addr": "192.168.16.10:6789", 2025-11-01 15:21:20.139473 | orchestrator | "nonce": 0 2025-11-01 15:21:20.139484 | orchestrator | } 2025-11-01 15:21:20.139494 | orchestrator | ] 2025-11-01 15:21:20.139505 | orchestrator | }, 2025-11-01 15:21:20.139515 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-11-01 15:21:20.139526 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-11-01 15:21:20.139536 | orchestrator | "priority": 0, 2025-11-01 15:21:20.139547 | orchestrator | "weight": 0, 2025-11-01 15:21:20.139558 | orchestrator | "crush_location": "{}" 2025-11-01 15:21:20.139568 | orchestrator | }, 2025-11-01 15:21:20.139581 | orchestrator | { 2025-11-01 15:21:20.139593 | orchestrator | "rank": 1, 2025-11-01 15:21:20.139604 | orchestrator | "name": "testbed-node-1", 2025-11-01 15:21:20.139616 | orchestrator | "public_addrs": { 2025-11-01 15:21:20.139628 | orchestrator | "addrvec": [ 2025-11-01 15:21:20.139640 | orchestrator | { 2025-11-01 15:21:20.139652 | orchestrator | "type": "v2", 2025-11-01 15:21:20.139663 | orchestrator | "addr": "192.168.16.11:3300", 2025-11-01 15:21:20.139675 | orchestrator | "nonce": 0 2025-11-01 15:21:20.139687 | orchestrator | }, 2025-11-01 15:21:20.139699 | orchestrator | { 2025-11-01 15:21:20.139711 | orchestrator | "type": "v1", 2025-11-01 15:21:20.139723 | orchestrator | "addr": "192.168.16.11:6789", 2025-11-01 15:21:20.139735 | orchestrator | "nonce": 0 2025-11-01 15:21:20.139746 | orchestrator | } 2025-11-01 15:21:20.139758 | orchestrator | ] 2025-11-01 15:21:20.139778 | orchestrator | }, 2025-11-01 15:21:20.139790 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-11-01 15:21:20.139801 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-11-01 15:21:20.139813 | orchestrator | "priority": 0, 2025-11-01 15:21:20.139825 | orchestrator | "weight": 0, 2025-11-01 15:21:20.139837 | orchestrator | "crush_location": "{}" 2025-11-01 15:21:20.139849 | orchestrator | }, 2025-11-01 15:21:20.139861 | orchestrator | { 2025-11-01 15:21:20.139872 | orchestrator | "rank": 2, 2025-11-01 15:21:20.139884 | orchestrator | "name": "testbed-node-2", 2025-11-01 15:21:20.139896 | orchestrator | "public_addrs": { 2025-11-01 15:21:20.139908 | orchestrator | "addrvec": [ 2025-11-01 15:21:20.139920 | orchestrator | { 2025-11-01 15:21:20.139932 | orchestrator | "type": "v2", 2025-11-01 15:21:20.139943 | orchestrator | "addr": "192.168.16.12:3300", 2025-11-01 15:21:20.139953 | orchestrator | "nonce": 0 2025-11-01 15:21:20.139964 | orchestrator | }, 2025-11-01 15:21:20.139974 | orchestrator | { 2025-11-01 15:21:20.139984 | orchestrator | "type": "v1", 2025-11-01 15:21:20.139995 | orchestrator | "addr": "192.168.16.12:6789", 2025-11-01 15:21:20.140005 | orchestrator | "nonce": 0 2025-11-01 15:21:20.140016 | orchestrator | } 2025-11-01 
15:21:20.140026 | orchestrator | ] 2025-11-01 15:21:20.140037 | orchestrator | }, 2025-11-01 15:21:20.140047 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-11-01 15:21:20.140058 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-11-01 15:21:20.140069 | orchestrator | "priority": 0, 2025-11-01 15:21:20.140079 | orchestrator | "weight": 0, 2025-11-01 15:21:20.140089 | orchestrator | "crush_location": "{}" 2025-11-01 15:21:20.140100 | orchestrator | } 2025-11-01 15:21:20.140110 | orchestrator | ] 2025-11-01 15:21:20.140121 | orchestrator | } 2025-11-01 15:21:20.140131 | orchestrator | } 2025-11-01 15:21:20.140152 | orchestrator | 2025-11-01 15:21:20.140164 | orchestrator | # Ceph free space status 2025-11-01 15:21:20.140175 | orchestrator | + echo 2025-11-01 15:21:20.140185 | orchestrator | + echo '# Ceph free space status' 2025-11-01 15:21:20.140196 | orchestrator | + echo 2025-11-01 15:21:20.140206 | orchestrator | 2025-11-01 15:21:20.140217 | orchestrator | + ceph df 2025-11-01 15:21:20.761226 | orchestrator | --- RAW STORAGE --- 2025-11-01 15:21:20.767139 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-11-01 15:21:20.767165 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2025-11-01 15:21:20.767179 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2025-11-01 15:21:20.767193 | orchestrator | 2025-11-01 15:21:20.767205 | orchestrator | --- POOLS --- 2025-11-01 15:21:20.767217 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-11-01 15:21:20.767229 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-11-01 15:21:20.767240 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-11-01 15:21:20.767273 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-11-01 15:21:20.767305 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-11-01 15:21:20.767317 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-11-01 15:21:20.767328 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-11-01 15:21:20.767339 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-11-01 15:21:20.767349 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-11-01 15:21:20.767360 | orchestrator | .rgw.root 9 32 2.2 KiB 6 48 KiB 0 53 GiB 2025-11-01 15:21:20.767371 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-11-01 15:21:20.767381 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-11-01 15:21:20.767392 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.93 35 GiB 2025-11-01 15:21:20.767402 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-11-01 15:21:20.767413 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-11-01 15:21:20.807817 | orchestrator | ++ semver latest 5.0.0 2025-11-01 15:21:20.860150 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-01 15:21:20.860179 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 15:21:20.860190 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-11-01 15:21:20.860201 | orchestrator | + osism apply facts 2025-11-01 15:21:33.078164 | orchestrator | 2025-11-01 15:21:33 | INFO  | Task 0ccd5265-1739-4f00-b5dd-7565dad0efab (facts) was prepared for execution. 2025-11-01 15:21:33.078338 | orchestrator | 2025-11-01 15:21:33 | INFO  | It takes a moment until task 0ccd5265-1739-4f00-b5dd-7565dad0efab (facts) has been started and output is visible here. 
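The xtrace directly above shows the pattern the testbed check script follows before the Ceph validations: MANAGER_VERSION (here "latest", exported by /opt/manager-vars.sh) is compared against 5.0.0 with the semver helper, the branch for older releases is skipped, and Ansible facts are refreshed because the host has no /etc/redhat-release. A minimal bash sketch of that gate, reconstructed from the trace and assuming semver prints -1/0/1 for older/equal/newer; the actual script ships with the testbed configuration and is not reproduced in this log:

#!/usr/bin/env bash
set -e

# MANAGER_VERSION is exported by /opt/manager-vars.sh ("latest" in this run).
: "${MANAGER_VERSION:=latest}"

# semver is the helper seen in the trace; assumed to print -1, 0 or 1.
result=$(semver "${MANAGER_VERSION}" 5.0.0)

if [[ ${result} -eq -1 ]] && [[ ${MANAGER_VERSION} != "latest" ]]; then
    # Only real releases older than 5.0.0 take this branch; "latest" also
    # yields -1 from semver, so it is excluded explicitly.
    echo "manager version ${MANAGER_VERSION} predates 5.0.0"
fi

# Refresh facts on non-RHEL hosts only, matching the conditionals above.
if [[ ! -e /etc/redhat-release ]]; then
    osism apply facts
fi

On this Debian/Ubuntu-based run the redhat-release test is true, so osism apply facts is dispatched and produces the "Apply role facts" play output that follows.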
2025-11-01 15:21:49.188429 | orchestrator | 2025-11-01 15:21:49.188546 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-11-01 15:21:49.188563 | orchestrator | 2025-11-01 15:21:49.188575 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-11-01 15:21:49.188587 | orchestrator | Saturday 01 November 2025 15:21:37 +0000 (0:00:00.295) 0:00:00.295 ***** 2025-11-01 15:21:49.188598 | orchestrator | ok: [testbed-manager] 2025-11-01 15:21:49.188610 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:21:49.188621 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:21:49.188631 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:21:49.188642 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:21:49.188653 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:21:49.188663 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:21:49.188674 | orchestrator | 2025-11-01 15:21:49.188685 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-11-01 15:21:49.188695 | orchestrator | Saturday 01 November 2025 15:21:39 +0000 (0:00:01.907) 0:00:02.202 ***** 2025-11-01 15:21:49.188706 | orchestrator | skipping: [testbed-manager] 2025-11-01 15:21:49.188718 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:21:49.188728 | orchestrator | skipping: [testbed-node-1] 2025-11-01 15:21:49.188739 | orchestrator | skipping: [testbed-node-2] 2025-11-01 15:21:49.188750 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:21:49.188760 | orchestrator | skipping: [testbed-node-4] 2025-11-01 15:21:49.188771 | orchestrator | skipping: [testbed-node-5] 2025-11-01 15:21:49.188782 | orchestrator | 2025-11-01 15:21:49.188792 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-01 15:21:49.188803 | orchestrator | 2025-11-01 15:21:49.188814 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-01 15:21:49.188825 | orchestrator | Saturday 01 November 2025 15:21:40 +0000 (0:00:01.393) 0:00:03.596 ***** 2025-11-01 15:21:49.188836 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:21:49.188847 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:21:49.188857 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:21:49.188893 | orchestrator | ok: [testbed-manager] 2025-11-01 15:21:49.188905 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:21:49.188916 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:21:49.188928 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:21:49.188940 | orchestrator | 2025-11-01 15:21:49.188951 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-11-01 15:21:49.188963 | orchestrator | 2025-11-01 15:21:49.188974 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-11-01 15:21:49.188987 | orchestrator | Saturday 01 November 2025 15:21:48 +0000 (0:00:07.277) 0:00:10.874 ***** 2025-11-01 15:21:49.188999 | orchestrator | skipping: [testbed-manager] 2025-11-01 15:21:49.189011 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:21:49.189022 | orchestrator | skipping: [testbed-node-1] 2025-11-01 15:21:49.189034 | orchestrator | skipping: [testbed-node-2] 2025-11-01 15:21:49.189046 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:21:49.189058 | orchestrator | skipping: [testbed-node-4] 2025-11-01 15:21:49.189070 | orchestrator | skipping: 
[testbed-node-5] 2025-11-01 15:21:49.189081 | orchestrator | 2025-11-01 15:21:49.189094 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 15:21:49.189106 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 15:21:49.189120 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 15:21:49.189132 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 15:21:49.189144 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 15:21:49.189156 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 15:21:49.189168 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 15:21:49.189180 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 15:21:49.189191 | orchestrator | 2025-11-01 15:21:49.189203 | orchestrator | 2025-11-01 15:21:49.189216 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 15:21:49.189227 | orchestrator | Saturday 01 November 2025 15:21:48 +0000 (0:00:00.580) 0:00:11.455 ***** 2025-11-01 15:21:49.189240 | orchestrator | =============================================================================== 2025-11-01 15:21:49.189252 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.28s 2025-11-01 15:21:49.189264 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.91s 2025-11-01 15:21:49.189276 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.39s 2025-11-01 15:21:49.189309 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2025-11-01 15:21:49.531656 | orchestrator | + osism validate ceph-mons 2025-11-01 15:22:22.236690 | orchestrator | 2025-11-01 15:22:22.236808 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-11-01 15:22:22.236825 | orchestrator | 2025-11-01 15:22:22.236836 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-11-01 15:22:22.236847 | orchestrator | Saturday 01 November 2025 15:22:06 +0000 (0:00:00.463) 0:00:00.463 ***** 2025-11-01 15:22:22.236867 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-01 15:22:22.236877 | orchestrator | 2025-11-01 15:22:22.236887 | orchestrator | TASK [Create report output directory] ****************************************** 2025-11-01 15:22:22.236897 | orchestrator | Saturday 01 November 2025 15:22:07 +0000 (0:00:00.847) 0:00:01.310 ***** 2025-11-01 15:22:22.236923 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-01 15:22:22.236933 | orchestrator | 2025-11-01 15:22:22.236943 | orchestrator | TASK [Define report vars] ****************************************************** 2025-11-01 15:22:22.236952 | orchestrator | Saturday 01 November 2025 15:22:08 +0000 (0:00:01.007) 0:00:02.317 ***** 2025-11-01 15:22:22.236962 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:22.236973 | orchestrator | 2025-11-01 15:22:22.236983 | orchestrator | TASK [Prepare test data for container existance 
test] ************************** 2025-11-01 15:22:22.236992 | orchestrator | Saturday 01 November 2025 15:22:08 +0000 (0:00:00.118) 0:00:02.436 ***** 2025-11-01 15:22:22.237002 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:22.237011 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:22:22.237021 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:22:22.237030 | orchestrator | 2025-11-01 15:22:22.237040 | orchestrator | TASK [Get container info] ****************************************************** 2025-11-01 15:22:22.237050 | orchestrator | Saturday 01 November 2025 15:22:08 +0000 (0:00:00.301) 0:00:02.737 ***** 2025-11-01 15:22:22.237059 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:22:22.237069 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:22:22.237078 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:22.237088 | orchestrator | 2025-11-01 15:22:22.237097 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-11-01 15:22:22.237107 | orchestrator | Saturday 01 November 2025 15:22:09 +0000 (0:00:01.011) 0:00:03.749 ***** 2025-11-01 15:22:22.237117 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:22.237127 | orchestrator | skipping: [testbed-node-1] 2025-11-01 15:22:22.237136 | orchestrator | skipping: [testbed-node-2] 2025-11-01 15:22:22.237146 | orchestrator | 2025-11-01 15:22:22.237155 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-11-01 15:22:22.237165 | orchestrator | Saturday 01 November 2025 15:22:09 +0000 (0:00:00.302) 0:00:04.051 ***** 2025-11-01 15:22:22.237175 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:22.237184 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:22:22.237194 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:22:22.237203 | orchestrator | 2025-11-01 15:22:22.237213 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-01 15:22:22.237226 | orchestrator | Saturday 01 November 2025 15:22:10 +0000 (0:00:00.503) 0:00:04.554 ***** 2025-11-01 15:22:22.237238 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:22.237249 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:22:22.237259 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:22:22.237270 | orchestrator | 2025-11-01 15:22:22.237281 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-11-01 15:22:22.237319 | orchestrator | Saturday 01 November 2025 15:22:10 +0000 (0:00:00.331) 0:00:04.886 ***** 2025-11-01 15:22:22.237334 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:22.237349 | orchestrator | skipping: [testbed-node-1] 2025-11-01 15:22:22.237365 | orchestrator | skipping: [testbed-node-2] 2025-11-01 15:22:22.237382 | orchestrator | 2025-11-01 15:22:22.237397 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-11-01 15:22:22.237414 | orchestrator | Saturday 01 November 2025 15:22:10 +0000 (0:00:00.305) 0:00:05.192 ***** 2025-11-01 15:22:22.237429 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:22.237444 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:22:22.237459 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:22:22.237475 | orchestrator | 2025-11-01 15:22:22.237493 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-11-01 15:22:22.237510 | orchestrator | Saturday 01 November 2025 15:22:11 +0000 (0:00:00.583) 0:00:05.775 ***** 
2025-11-01 15:22:22.237526 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:22.237542 | orchestrator | 2025-11-01 15:22:22.237559 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-11-01 15:22:22.237575 | orchestrator | Saturday 01 November 2025 15:22:11 +0000 (0:00:00.252) 0:00:06.028 ***** 2025-11-01 15:22:22.237602 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:22.237613 | orchestrator | 2025-11-01 15:22:22.237623 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-11-01 15:22:22.237632 | orchestrator | Saturday 01 November 2025 15:22:12 +0000 (0:00:00.257) 0:00:06.286 ***** 2025-11-01 15:22:22.237641 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:22.237651 | orchestrator | 2025-11-01 15:22:22.237660 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:22:22.237670 | orchestrator | Saturday 01 November 2025 15:22:12 +0000 (0:00:00.264) 0:00:06.550 ***** 2025-11-01 15:22:22.237679 | orchestrator | 2025-11-01 15:22:22.237689 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:22:22.237698 | orchestrator | Saturday 01 November 2025 15:22:12 +0000 (0:00:00.071) 0:00:06.622 ***** 2025-11-01 15:22:22.237708 | orchestrator | 2025-11-01 15:22:22.237717 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:22:22.237727 | orchestrator | Saturday 01 November 2025 15:22:12 +0000 (0:00:00.072) 0:00:06.694 ***** 2025-11-01 15:22:22.237736 | orchestrator | 2025-11-01 15:22:22.237746 | orchestrator | TASK [Print report file information] ******************************************* 2025-11-01 15:22:22.237755 | orchestrator | Saturday 01 November 2025 15:22:12 +0000 (0:00:00.085) 0:00:06.779 ***** 2025-11-01 15:22:22.237765 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:22.237774 | orchestrator | 2025-11-01 15:22:22.237784 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-11-01 15:22:22.237794 | orchestrator | Saturday 01 November 2025 15:22:12 +0000 (0:00:00.254) 0:00:07.033 ***** 2025-11-01 15:22:22.237806 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:22.237823 | orchestrator | 2025-11-01 15:22:22.237864 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-11-01 15:22:22.237889 | orchestrator | Saturday 01 November 2025 15:22:13 +0000 (0:00:00.252) 0:00:07.286 ***** 2025-11-01 15:22:22.237906 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:22.237918 | orchestrator | 2025-11-01 15:22:22.237928 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-11-01 15:22:22.237937 | orchestrator | Saturday 01 November 2025 15:22:13 +0000 (0:00:00.127) 0:00:07.413 ***** 2025-11-01 15:22:22.237947 | orchestrator | changed: [testbed-node-0] 2025-11-01 15:22:22.237956 | orchestrator | 2025-11-01 15:22:22.237966 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-11-01 15:22:22.237975 | orchestrator | Saturday 01 November 2025 15:22:14 +0000 (0:00:01.624) 0:00:09.037 ***** 2025-11-01 15:22:22.237985 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:22.237994 | orchestrator | 2025-11-01 15:22:22.238004 | orchestrator | TASK [Fail quorum test if not all monitors are in 
quorum] ********************** 2025-11-01 15:22:22.238075 | orchestrator | Saturday 01 November 2025 15:22:15 +0000 (0:00:00.519) 0:00:09.557 ***** 2025-11-01 15:22:22.238089 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:22.238099 | orchestrator | 2025-11-01 15:22:22.238108 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-11-01 15:22:22.238118 | orchestrator | Saturday 01 November 2025 15:22:15 +0000 (0:00:00.146) 0:00:09.704 ***** 2025-11-01 15:22:22.238127 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:22.238137 | orchestrator | 2025-11-01 15:22:22.238146 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-11-01 15:22:22.238156 | orchestrator | Saturday 01 November 2025 15:22:15 +0000 (0:00:00.322) 0:00:10.027 ***** 2025-11-01 15:22:22.238165 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:22.238175 | orchestrator | 2025-11-01 15:22:22.238184 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-11-01 15:22:22.238194 | orchestrator | Saturday 01 November 2025 15:22:16 +0000 (0:00:00.324) 0:00:10.351 ***** 2025-11-01 15:22:22.238203 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:22.238213 | orchestrator | 2025-11-01 15:22:22.238222 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-11-01 15:22:22.238241 | orchestrator | Saturday 01 November 2025 15:22:16 +0000 (0:00:00.122) 0:00:10.474 ***** 2025-11-01 15:22:22.238250 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:22.238260 | orchestrator | 2025-11-01 15:22:22.238269 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-11-01 15:22:22.238279 | orchestrator | Saturday 01 November 2025 15:22:16 +0000 (0:00:00.137) 0:00:10.612 ***** 2025-11-01 15:22:22.238289 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:22.238324 | orchestrator | 2025-11-01 15:22:22.238334 | orchestrator | TASK [Gather status data] ****************************************************** 2025-11-01 15:22:22.238344 | orchestrator | Saturday 01 November 2025 15:22:16 +0000 (0:00:00.121) 0:00:10.733 ***** 2025-11-01 15:22:22.238353 | orchestrator | changed: [testbed-node-0] 2025-11-01 15:22:22.238410 | orchestrator | 2025-11-01 15:22:22.238422 | orchestrator | TASK [Set health test data] **************************************************** 2025-11-01 15:22:22.238431 | orchestrator | Saturday 01 November 2025 15:22:17 +0000 (0:00:01.435) 0:00:12.169 ***** 2025-11-01 15:22:22.238441 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:22.238451 | orchestrator | 2025-11-01 15:22:22.238464 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-11-01 15:22:22.238481 | orchestrator | Saturday 01 November 2025 15:22:18 +0000 (0:00:00.330) 0:00:12.499 ***** 2025-11-01 15:22:22.238491 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:22.238501 | orchestrator | 2025-11-01 15:22:22.238510 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-11-01 15:22:22.238520 | orchestrator | Saturday 01 November 2025 15:22:18 +0000 (0:00:00.151) 0:00:12.650 ***** 2025-11-01 15:22:22.238529 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:22.238538 | orchestrator | 2025-11-01 15:22:22.238548 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] 
**************** 2025-11-01 15:22:22.238557 | orchestrator | Saturday 01 November 2025 15:22:18 +0000 (0:00:00.157) 0:00:12.808 ***** 2025-11-01 15:22:22.238567 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:22.238576 | orchestrator | 2025-11-01 15:22:22.238585 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-11-01 15:22:22.238596 | orchestrator | Saturday 01 November 2025 15:22:18 +0000 (0:00:00.148) 0:00:12.957 ***** 2025-11-01 15:22:22.238612 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:22.238628 | orchestrator | 2025-11-01 15:22:22.238644 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-11-01 15:22:22.238660 | orchestrator | Saturday 01 November 2025 15:22:19 +0000 (0:00:00.352) 0:00:13.310 ***** 2025-11-01 15:22:22.238686 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-01 15:22:22.238703 | orchestrator | 2025-11-01 15:22:22.238720 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-11-01 15:22:22.238735 | orchestrator | Saturday 01 November 2025 15:22:19 +0000 (0:00:00.260) 0:00:13.570 ***** 2025-11-01 15:22:22.238745 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:22.238754 | orchestrator | 2025-11-01 15:22:22.238763 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-11-01 15:22:22.238773 | orchestrator | Saturday 01 November 2025 15:22:19 +0000 (0:00:00.280) 0:00:13.851 ***** 2025-11-01 15:22:22.238782 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-01 15:22:22.238792 | orchestrator | 2025-11-01 15:22:22.238801 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-11-01 15:22:22.238811 | orchestrator | Saturday 01 November 2025 15:22:21 +0000 (0:00:01.844) 0:00:15.695 ***** 2025-11-01 15:22:22.238820 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-01 15:22:22.238830 | orchestrator | 2025-11-01 15:22:22.238839 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-11-01 15:22:22.238849 | orchestrator | Saturday 01 November 2025 15:22:21 +0000 (0:00:00.278) 0:00:15.974 ***** 2025-11-01 15:22:22.238858 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-01 15:22:22.238868 | orchestrator | 2025-11-01 15:22:22.238898 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:22:25.044397 | orchestrator | Saturday 01 November 2025 15:22:22 +0000 (0:00:00.266) 0:00:16.240 ***** 2025-11-01 15:22:25.044491 | orchestrator | 2025-11-01 15:22:25.044506 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:22:25.044517 | orchestrator | Saturday 01 November 2025 15:22:22 +0000 (0:00:00.072) 0:00:16.313 ***** 2025-11-01 15:22:25.044528 | orchestrator | 2025-11-01 15:22:25.044539 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:22:25.044549 | orchestrator | Saturday 01 November 2025 15:22:22 +0000 (0:00:00.073) 0:00:16.386 ***** 2025-11-01 15:22:25.044560 | orchestrator | 2025-11-01 15:22:25.044571 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-11-01 15:22:25.044581 | orchestrator | Saturday 01 November 
2025 15:22:22 +0000 (0:00:00.075) 0:00:16.462 ***** 2025-11-01 15:22:25.044592 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-01 15:22:25.044603 | orchestrator | 2025-11-01 15:22:25.044613 | orchestrator | TASK [Print report file information] ******************************************* 2025-11-01 15:22:25.044624 | orchestrator | Saturday 01 November 2025 15:22:23 +0000 (0:00:01.594) 0:00:18.056 ***** 2025-11-01 15:22:25.044635 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-11-01 15:22:25.044645 | orchestrator |  "msg": [ 2025-11-01 15:22:25.044657 | orchestrator |  "Validator run completed.", 2025-11-01 15:22:25.044668 | orchestrator |  "You can find the report file here:", 2025-11-01 15:22:25.044679 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-11-01T15:22:06+00:00-report.json", 2025-11-01 15:22:25.044689 | orchestrator |  "on the following host:", 2025-11-01 15:22:25.044700 | orchestrator |  "testbed-manager" 2025-11-01 15:22:25.044711 | orchestrator |  ] 2025-11-01 15:22:25.044721 | orchestrator | } 2025-11-01 15:22:25.044732 | orchestrator | 2025-11-01 15:22:25.044743 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 15:22:25.044755 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-11-01 15:22:25.044767 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 15:22:25.044799 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 15:22:25.044810 | orchestrator | 2025-11-01 15:22:25.044821 | orchestrator | 2025-11-01 15:22:25.044831 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 15:22:25.044847 | orchestrator | Saturday 01 November 2025 15:22:24 +0000 (0:00:00.856) 0:00:18.912 ***** 2025-11-01 15:22:25.044858 | orchestrator | =============================================================================== 2025-11-01 15:22:25.044868 | orchestrator | Aggregate test results step one ----------------------------------------- 1.84s 2025-11-01 15:22:25.044879 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.62s 2025-11-01 15:22:25.044889 | orchestrator | Write report file ------------------------------------------------------- 1.59s 2025-11-01 15:22:25.044902 | orchestrator | Gather status data ------------------------------------------------------ 1.44s 2025-11-01 15:22:25.044914 | orchestrator | Get container info ------------------------------------------------------ 1.01s 2025-11-01 15:22:25.044926 | orchestrator | Create report output directory ------------------------------------------ 1.01s 2025-11-01 15:22:25.044938 | orchestrator | Print report file information ------------------------------------------- 0.86s 2025-11-01 15:22:25.044950 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s 2025-11-01 15:22:25.044962 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.58s 2025-11-01 15:22:25.044975 | orchestrator | Set quorum test data ---------------------------------------------------- 0.52s 2025-11-01 15:22:25.045007 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s 2025-11-01 15:22:25.045020 | orchestrator | Pass 
cluster-health if status is OK (strict) ---------------------------- 0.35s 2025-11-01 15:22:25.045032 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s 2025-11-01 15:22:25.045044 | orchestrator | Set health test data ---------------------------------------------------- 0.33s 2025-11-01 15:22:25.045056 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s 2025-11-01 15:22:25.045069 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s 2025-11-01 15:22:25.045081 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.31s 2025-11-01 15:22:25.045093 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2025-11-01 15:22:25.045105 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2025-11-01 15:22:25.045117 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.28s 2025-11-01 15:22:25.386813 | orchestrator | + osism validate ceph-mgrs 2025-11-01 15:22:57.168377 | orchestrator | 2025-11-01 15:22:57.168496 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-11-01 15:22:57.168512 | orchestrator | 2025-11-01 15:22:57.168523 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-11-01 15:22:57.168535 | orchestrator | Saturday 01 November 2025 15:22:42 +0000 (0:00:00.451) 0:00:00.451 ***** 2025-11-01 15:22:57.168547 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-01 15:22:57.168558 | orchestrator | 2025-11-01 15:22:57.168569 | orchestrator | TASK [Create report output directory] ****************************************** 2025-11-01 15:22:57.168579 | orchestrator | Saturday 01 November 2025 15:22:42 +0000 (0:00:00.845) 0:00:01.296 ***** 2025-11-01 15:22:57.168590 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-01 15:22:57.168601 | orchestrator | 2025-11-01 15:22:57.168612 | orchestrator | TASK [Define report vars] ****************************************************** 2025-11-01 15:22:57.168623 | orchestrator | Saturday 01 November 2025 15:22:44 +0000 (0:00:01.053) 0:00:02.350 ***** 2025-11-01 15:22:57.168634 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:57.168645 | orchestrator | 2025-11-01 15:22:57.168656 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-11-01 15:22:57.168667 | orchestrator | Saturday 01 November 2025 15:22:44 +0000 (0:00:00.137) 0:00:02.488 ***** 2025-11-01 15:22:57.168677 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:57.168688 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:22:57.168699 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:22:57.168709 | orchestrator | 2025-11-01 15:22:57.168720 | orchestrator | TASK [Get container info] ****************************************************** 2025-11-01 15:22:57.168731 | orchestrator | Saturday 01 November 2025 15:22:44 +0000 (0:00:00.303) 0:00:02.792 ***** 2025-11-01 15:22:57.168742 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:22:57.168752 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:22:57.168764 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:57.168774 | orchestrator | 2025-11-01 15:22:57.168785 | orchestrator | TASK [Set test result to failed if container is missing] 
*********************** 2025-11-01 15:22:57.168796 | orchestrator | Saturday 01 November 2025 15:22:45 +0000 (0:00:01.082) 0:00:03.875 ***** 2025-11-01 15:22:57.168807 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:57.168818 | orchestrator | skipping: [testbed-node-1] 2025-11-01 15:22:57.168829 | orchestrator | skipping: [testbed-node-2] 2025-11-01 15:22:57.168840 | orchestrator | 2025-11-01 15:22:57.168850 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-11-01 15:22:57.168861 | orchestrator | Saturday 01 November 2025 15:22:45 +0000 (0:00:00.302) 0:00:04.177 ***** 2025-11-01 15:22:57.168873 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:57.168885 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:22:57.168897 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:22:57.168932 | orchestrator | 2025-11-01 15:22:57.168945 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-01 15:22:57.168958 | orchestrator | Saturday 01 November 2025 15:22:46 +0000 (0:00:00.504) 0:00:04.681 ***** 2025-11-01 15:22:57.168969 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:57.168979 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:22:57.168990 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:22:57.169001 | orchestrator | 2025-11-01 15:22:57.169012 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-11-01 15:22:57.169022 | orchestrator | Saturday 01 November 2025 15:22:46 +0000 (0:00:00.311) 0:00:04.993 ***** 2025-11-01 15:22:57.169033 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:57.169044 | orchestrator | skipping: [testbed-node-1] 2025-11-01 15:22:57.169069 | orchestrator | skipping: [testbed-node-2] 2025-11-01 15:22:57.169080 | orchestrator | 2025-11-01 15:22:57.169091 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-11-01 15:22:57.169102 | orchestrator | Saturday 01 November 2025 15:22:46 +0000 (0:00:00.311) 0:00:05.305 ***** 2025-11-01 15:22:57.169112 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:57.169123 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:22:57.169134 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:22:57.169144 | orchestrator | 2025-11-01 15:22:57.169155 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-11-01 15:22:57.169166 | orchestrator | Saturday 01 November 2025 15:22:47 +0000 (0:00:00.510) 0:00:05.815 ***** 2025-11-01 15:22:57.169176 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:57.169187 | orchestrator | 2025-11-01 15:22:57.169198 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-11-01 15:22:57.169209 | orchestrator | Saturday 01 November 2025 15:22:47 +0000 (0:00:00.254) 0:00:06.070 ***** 2025-11-01 15:22:57.169219 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:57.169230 | orchestrator | 2025-11-01 15:22:57.169241 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-11-01 15:22:57.169252 | orchestrator | Saturday 01 November 2025 15:22:48 +0000 (0:00:00.301) 0:00:06.372 ***** 2025-11-01 15:22:57.169262 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:57.169273 | orchestrator | 2025-11-01 15:22:57.169284 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 
15:22:57.169317 | orchestrator | Saturday 01 November 2025 15:22:48 +0000 (0:00:00.278) 0:00:06.650 ***** 2025-11-01 15:22:57.169329 | orchestrator | 2025-11-01 15:22:57.169340 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:22:57.169351 | orchestrator | Saturday 01 November 2025 15:22:48 +0000 (0:00:00.074) 0:00:06.725 ***** 2025-11-01 15:22:57.169361 | orchestrator | 2025-11-01 15:22:57.169372 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:22:57.169383 | orchestrator | Saturday 01 November 2025 15:22:48 +0000 (0:00:00.075) 0:00:06.801 ***** 2025-11-01 15:22:57.169393 | orchestrator | 2025-11-01 15:22:57.169404 | orchestrator | TASK [Print report file information] ******************************************* 2025-11-01 15:22:57.169415 | orchestrator | Saturday 01 November 2025 15:22:48 +0000 (0:00:00.080) 0:00:06.881 ***** 2025-11-01 15:22:57.169425 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:57.169436 | orchestrator | 2025-11-01 15:22:57.169447 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-11-01 15:22:57.169457 | orchestrator | Saturday 01 November 2025 15:22:48 +0000 (0:00:00.241) 0:00:07.123 ***** 2025-11-01 15:22:57.169468 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:57.169479 | orchestrator | 2025-11-01 15:22:57.169507 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-11-01 15:22:57.169518 | orchestrator | Saturday 01 November 2025 15:22:49 +0000 (0:00:00.258) 0:00:07.382 ***** 2025-11-01 15:22:57.169529 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:57.169540 | orchestrator | 2025-11-01 15:22:57.169550 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-11-01 15:22:57.169570 | orchestrator | Saturday 01 November 2025 15:22:49 +0000 (0:00:00.136) 0:00:07.518 ***** 2025-11-01 15:22:57.169581 | orchestrator | changed: [testbed-node-0] 2025-11-01 15:22:57.169591 | orchestrator | 2025-11-01 15:22:57.169602 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-11-01 15:22:57.169613 | orchestrator | Saturday 01 November 2025 15:22:51 +0000 (0:00:02.093) 0:00:09.611 ***** 2025-11-01 15:22:57.169624 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:57.169634 | orchestrator | 2025-11-01 15:22:57.169645 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-11-01 15:22:57.169656 | orchestrator | Saturday 01 November 2025 15:22:51 +0000 (0:00:00.463) 0:00:10.075 ***** 2025-11-01 15:22:57.169667 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:57.169677 | orchestrator | 2025-11-01 15:22:57.169688 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-11-01 15:22:57.169699 | orchestrator | Saturday 01 November 2025 15:22:52 +0000 (0:00:00.331) 0:00:10.406 ***** 2025-11-01 15:22:57.169709 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:57.169720 | orchestrator | 2025-11-01 15:22:57.169731 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-11-01 15:22:57.169741 | orchestrator | Saturday 01 November 2025 15:22:52 +0000 (0:00:00.144) 0:00:10.550 ***** 2025-11-01 15:22:57.169752 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:22:57.169763 | orchestrator | 
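The mgr-module tasks above ("Gather list of mgr modules", "Parse mgr module list from json", "Extract list of enabled mgr modules") boil down to reading "ceph mgr module ls" and comparing the enabled set against the expected modules. A rough manual equivalent, assuming the ceph CLI and jq are reachable; the module name "prometheus" is only a placeholder, not necessarily what this validator requires:

  # Flat list of currently enabled mgr modules
  ceph mgr module ls -f json | jq -r '.enabled_modules[]'

  # Exit non-zero if the placeholder module is not in the enabled set
  ceph mgr module ls -f json | jq -e '.enabled_modules | index("prometheus")' >/dev/null \
      || echo "prometheus is not enabled" >&2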
2025-11-01 15:22:57.169774 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-11-01 15:22:57.169784 | orchestrator | Saturday 01 November 2025 15:22:52 +0000 (0:00:00.159) 0:00:10.710 ***** 2025-11-01 15:22:57.169795 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-01 15:22:57.169806 | orchestrator | 2025-11-01 15:22:57.169817 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-11-01 15:22:57.169828 | orchestrator | Saturday 01 November 2025 15:22:52 +0000 (0:00:00.254) 0:00:10.964 ***** 2025-11-01 15:22:57.169838 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:22:57.169849 | orchestrator | 2025-11-01 15:22:57.169859 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-11-01 15:22:57.169870 | orchestrator | Saturday 01 November 2025 15:22:52 +0000 (0:00:00.253) 0:00:11.217 ***** 2025-11-01 15:22:57.169881 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-01 15:22:57.169891 | orchestrator | 2025-11-01 15:22:57.169902 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-11-01 15:22:57.169913 | orchestrator | Saturday 01 November 2025 15:22:54 +0000 (0:00:01.261) 0:00:12.479 ***** 2025-11-01 15:22:57.169924 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-01 15:22:57.169934 | orchestrator | 2025-11-01 15:22:57.169945 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-11-01 15:22:57.169955 | orchestrator | Saturday 01 November 2025 15:22:54 +0000 (0:00:00.260) 0:00:12.740 ***** 2025-11-01 15:22:57.169966 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-01 15:22:57.169977 | orchestrator | 2025-11-01 15:22:57.169988 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:22:57.169998 | orchestrator | Saturday 01 November 2025 15:22:54 +0000 (0:00:00.285) 0:00:13.025 ***** 2025-11-01 15:22:57.170009 | orchestrator | 2025-11-01 15:22:57.170073 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:22:57.170084 | orchestrator | Saturday 01 November 2025 15:22:54 +0000 (0:00:00.082) 0:00:13.108 ***** 2025-11-01 15:22:57.170095 | orchestrator | 2025-11-01 15:22:57.170105 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:22:57.170116 | orchestrator | Saturday 01 November 2025 15:22:54 +0000 (0:00:00.074) 0:00:13.182 ***** 2025-11-01 15:22:57.170127 | orchestrator | 2025-11-01 15:22:57.170137 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-11-01 15:22:57.170148 | orchestrator | Saturday 01 November 2025 15:22:55 +0000 (0:00:00.321) 0:00:13.503 ***** 2025-11-01 15:22:57.170166 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-11-01 15:22:57.170176 | orchestrator | 2025-11-01 15:22:57.170187 | orchestrator | TASK [Print report file information] ******************************************* 2025-11-01 15:22:57.170198 | orchestrator | Saturday 01 November 2025 15:22:56 +0000 (0:00:01.384) 0:00:14.888 ***** 2025-11-01 15:22:57.170208 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-11-01 15:22:57.170219 | orchestrator |  "msg": [ 2025-11-01 
15:22:57.170230 | orchestrator |  "Validator run completed.", 2025-11-01 15:22:57.170241 | orchestrator |  "You can find the report file here:", 2025-11-01 15:22:57.170252 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-11-01T15:22:42+00:00-report.json", 2025-11-01 15:22:57.170263 | orchestrator |  "on the following host:", 2025-11-01 15:22:57.170274 | orchestrator |  "testbed-manager" 2025-11-01 15:22:57.170285 | orchestrator |  ] 2025-11-01 15:22:57.170311 | orchestrator | } 2025-11-01 15:22:57.170323 | orchestrator | 2025-11-01 15:22:57.170334 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 15:22:57.170345 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-11-01 15:22:57.170358 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 15:22:57.170377 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 15:22:57.534080 | orchestrator | 2025-11-01 15:22:57.534141 | orchestrator | 2025-11-01 15:22:57.534154 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 15:22:57.534166 | orchestrator | Saturday 01 November 2025 15:22:57 +0000 (0:00:00.583) 0:00:15.472 ***** 2025-11-01 15:22:57.534177 | orchestrator | =============================================================================== 2025-11-01 15:22:57.534188 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.09s 2025-11-01 15:22:57.534199 | orchestrator | Write report file ------------------------------------------------------- 1.38s 2025-11-01 15:22:57.534209 | orchestrator | Aggregate test results step one ----------------------------------------- 1.26s 2025-11-01 15:22:57.534220 | orchestrator | Get container info ------------------------------------------------------ 1.08s 2025-11-01 15:22:57.534230 | orchestrator | Create report output directory ------------------------------------------ 1.05s 2025-11-01 15:22:57.534241 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s 2025-11-01 15:22:57.534268 | orchestrator | Print report file information ------------------------------------------- 0.58s 2025-11-01 15:22:57.534279 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.51s 2025-11-01 15:22:57.534290 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s 2025-11-01 15:22:57.534344 | orchestrator | Flush handlers ---------------------------------------------------------- 0.48s 2025-11-01 15:22:57.534355 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.46s 2025-11-01 15:22:57.534366 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.33s 2025-11-01 15:22:57.534377 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s 2025-11-01 15:22:57.534388 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-11-01 15:22:57.534398 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2025-11-01 15:22:57.534409 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2025-11-01 15:22:57.534420 | orchestrator | 
Aggregate test results step two ----------------------------------------- 0.30s 2025-11-01 15:22:57.534431 | orchestrator | Aggregate test results step three --------------------------------------- 0.29s 2025-11-01 15:22:57.534456 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s 2025-11-01 15:22:57.534467 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s 2025-11-01 15:22:57.888101 | orchestrator | + osism validate ceph-osds 2025-11-01 15:23:20.070354 | orchestrator | 2025-11-01 15:23:20.070467 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-11-01 15:23:20.070484 | orchestrator | 2025-11-01 15:23:20.070496 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-11-01 15:23:20.070508 | orchestrator | Saturday 01 November 2025 15:23:15 +0000 (0:00:00.437) 0:00:00.437 ***** 2025-11-01 15:23:20.070520 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-01 15:23:20.070531 | orchestrator | 2025-11-01 15:23:20.070542 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-01 15:23:20.070571 | orchestrator | Saturday 01 November 2025 15:23:16 +0000 (0:00:00.883) 0:00:01.321 ***** 2025-11-01 15:23:20.070583 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-01 15:23:20.070594 | orchestrator | 2025-11-01 15:23:20.070605 | orchestrator | TASK [Create report output directory] ****************************************** 2025-11-01 15:23:20.070616 | orchestrator | Saturday 01 November 2025 15:23:16 +0000 (0:00:00.540) 0:00:01.862 ***** 2025-11-01 15:23:20.070627 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-01 15:23:20.070638 | orchestrator | 2025-11-01 15:23:20.070649 | orchestrator | TASK [Define report vars] ****************************************************** 2025-11-01 15:23:20.070660 | orchestrator | Saturday 01 November 2025 15:23:17 +0000 (0:00:00.756) 0:00:02.619 ***** 2025-11-01 15:23:20.070671 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:20.070682 | orchestrator | 2025-11-01 15:23:20.070694 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-11-01 15:23:20.070705 | orchestrator | Saturday 01 November 2025 15:23:17 +0000 (0:00:00.153) 0:00:02.773 ***** 2025-11-01 15:23:20.070716 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:20.070727 | orchestrator | 2025-11-01 15:23:20.070738 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-11-01 15:23:20.070748 | orchestrator | Saturday 01 November 2025 15:23:17 +0000 (0:00:00.136) 0:00:02.910 ***** 2025-11-01 15:23:20.070759 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:20.070770 | orchestrator | skipping: [testbed-node-4] 2025-11-01 15:23:20.070781 | orchestrator | skipping: [testbed-node-5] 2025-11-01 15:23:20.070792 | orchestrator | 2025-11-01 15:23:20.070803 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-11-01 15:23:20.070813 | orchestrator | Saturday 01 November 2025 15:23:18 +0000 (0:00:00.343) 0:00:03.253 ***** 2025-11-01 15:23:20.070824 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:20.070835 | orchestrator | 2025-11-01 15:23:20.070846 | orchestrator | TASK [Calculate OSD devices for each host] 
************************************* 2025-11-01 15:23:20.070857 | orchestrator | Saturday 01 November 2025 15:23:18 +0000 (0:00:00.151) 0:00:03.405 ***** 2025-11-01 15:23:20.070868 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:20.070879 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:20.070890 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:20.070901 | orchestrator | 2025-11-01 15:23:20.070912 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-11-01 15:23:20.070923 | orchestrator | Saturday 01 November 2025 15:23:18 +0000 (0:00:00.319) 0:00:03.724 ***** 2025-11-01 15:23:20.070934 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:20.070944 | orchestrator | 2025-11-01 15:23:20.070955 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-01 15:23:20.070966 | orchestrator | Saturday 01 November 2025 15:23:19 +0000 (0:00:00.814) 0:00:04.539 ***** 2025-11-01 15:23:20.070977 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:20.070988 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:20.070999 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:20.071010 | orchestrator | 2025-11-01 15:23:20.071021 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-11-01 15:23:20.071055 | orchestrator | Saturday 01 November 2025 15:23:19 +0000 (0:00:00.349) 0:00:04.889 ***** 2025-11-01 15:23:20.071069 | orchestrator | skipping: [testbed-node-3] => (item={'id': '853582380c56d21b942b212f4e3427b30bea27cef972b48d579468fde8ad387e', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2025-11-01 15:23:20.071082 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7ce6daf69cf8387068971cbc5387fbe88ee846f020b5a489b41f96e4356a4695', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2025-11-01 15:23:20.071094 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2ad9af36ecc488b9e60be2389cf74531d7c78243b4ba02d07e43891c6fd24ece', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2025-11-01 15:23:20.071107 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4c45589a7eeed1ede0ae9e73efad04b2dca048c86e3ac7a4eda36433fdeaa106', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 54 minutes (healthy)'})  2025-11-01 15:23:20.071118 | orchestrator | skipping: [testbed-node-3] => (item={'id': '34d07e73e37fd844cc248f95b3a7b35993051ba0d6806be5234e7f184764c977', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 55 minutes (healthy)'})  2025-11-01 15:23:20.071146 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3fc68438eca4c391b9767865939dfa652debd57a5e17fdd8ce63e4bb97758f2c', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 57 minutes'})  2025-11-01 15:23:20.071164 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5d47f529b933521e438dbfa02d73af6401c4773b4bd5b933c2e58f94ee33f86e', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 57 
minutes'})  2025-11-01 15:23:20.071180 | orchestrator | skipping: [testbed-node-3] => (item={'id': '82c1daf45bb05f6821e03d6c1b8d062288b76464557d21db7d4e511b9dc725b8', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 58 minutes'})  2025-11-01 15:23:20.071191 | orchestrator | skipping: [testbed-node-3] => (item={'id': '655c3c9f1bd21e42581332ad126f9d0309d1196a1f83dd2d551bd88ec7d02def', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 59 minutes (healthy)'})  2025-11-01 15:23:20.071203 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c8d73f66b79289f35195b663b2ed3136e1538f9790004a35b468d97df340a973', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.071214 | orchestrator | skipping: [testbed-node-3] => (item={'id': '88fe84dc9274f182d65d9e0dd5dc1ca6e2bc60669830373975c55eab414c7494', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.071225 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2dfe30723c7df10e0e4f4e2eabafede2c818dce6512ce14163c88836d34ac67c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.071237 | orchestrator | ok: [testbed-node-3] => (item={'id': '0dbae067a55ba23d708a6bf763e730c83422f4e416fcb64843ef37af0392da85', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'}) 2025-11-01 15:23:20.071249 | orchestrator | ok: [testbed-node-3] => (item={'id': '03a688c1bc0581f4a5111f15bd3450bdd1c39b1d5029358f41d3053b4cad51bf', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2025-11-01 15:23:20.071268 | orchestrator | skipping: [testbed-node-3] => (item={'id': '00ba57df43ae424517aa139dcb46aa5cb41d48f3c62e7d6a8c015b3a2a586079', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.071280 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0e7d2b515bfbe223c8d84580eeb3251739c94592b8a977aa2ef81b9492dc4846', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2025-11-01 15:23:20.071294 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cef587c7029e18d42b9371314ab23051e5a442253587f3d38ffcdfeed763f477', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2025-11-01 15:23:20.071325 | orchestrator | skipping: [testbed-node-3] => (item={'id': '200408e2fa04e56e6caf02e32f2dc9059d2e9389f6968b07c7e53bcda9b08890', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.071337 | orchestrator | skipping: [testbed-node-3] => (item={'id': '12476365685ff589667bbbfc27c3963489446e8a79dd3c1e905cdcfb5327f4f0', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 
'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.071347 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b6ba601b2275614e91f81d21af1488936beb7dddbbf37f2d8d6239ff7b33c91f', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.071359 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3b25aa0e43e196e1fd5d074f168583da2bf4b04d118933d71688047489fc92a8', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2025-11-01 15:23:20.071377 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3050caece57471d69d82643e082f1702b7a4c4946b2299c59dcfe942491828a9', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2025-11-01 15:23:20.348214 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e4a91a6b831e4de5d794556af428cb1a616095a3e13e2e1015ac3821355bbb7d', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2025-11-01 15:23:20.348294 | orchestrator | skipping: [testbed-node-4] => (item={'id': '432bf2d923664b4a7c97b9069d7cd44236224c836a4b25e6de1e708ae4f1a289', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 54 minutes (healthy)'})  2025-11-01 15:23:20.348337 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5599415e60fe8bb05018eb366f202d9424ce39cb93f75def8609b83d197efeab', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 55 minutes (healthy)'})  2025-11-01 15:23:20.348349 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'db1a203188bf6c9968989c94d9e68466a8674c88a466ea9740015e46a19eced1', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 57 minutes'})  2025-11-01 15:23:20.348362 | orchestrator | skipping: [testbed-node-4] => (item={'id': '91367fe229bebb5749d4d7826dd8d6c7811a485deaaed1b79def5948d8552ceb', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 57 minutes'})  2025-11-01 15:23:20.348373 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5c1b0266733d3f03a6a885cca3b1e167125baed532fb6d7e086bd948f0ffb949', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 58 minutes'})  2025-11-01 15:23:20.348405 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a41fe22b89c726d475e287cc75631ba5fb357c22f2a0be1f9597e60b7dd6ede8', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 59 minutes (healthy)'})  2025-11-01 15:23:20.348416 | orchestrator | skipping: [testbed-node-4] => (item={'id': '69ebac985ce457e3554882a07be891cfb320f48c8200a4ac7993593bd5bffc9d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.348427 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c46cf0888fa1fdb9bd28aeea722ed0676630de580ba59cbf0c5da6b3f9945910', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.348438 | orchestrator | skipping: [testbed-node-4] => (item={'id': '270a16bab6bfdf62d55b0815eec5f511c920af90271c5872ab932a52c093e823', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.348468 | orchestrator | ok: [testbed-node-4] => (item={'id': 'fe6443812eb1805ebc17c244179037d8bbaf5a8c430db1139936ee81b2386789', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2025-11-01 15:23:20.348480 | orchestrator | ok: [testbed-node-4] => (item={'id': '7023946650c0cf88f7de202b4144e22a1fe9d52df0048f6b854d05889ecacd17', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2025-11-01 15:23:20.348491 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2ceed9050bd40163f7e10633362450dde55d66317ac21ad2539417f90c236707', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.348502 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8c51fb5f4972d34513e410afd23e9b7e9167c87e88aaaa338035559ffbb95d75', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2025-11-01 15:23:20.348514 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0884ee3535edbbbc0d82523bfc8f7666c86298b3d9caf779316fd6d01d343630', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2025-11-01 15:23:20.348541 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4f36f538c8242e4539786e2ae35bb11f061b6816b6fe5523017bf4af10d1cf83', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.348557 | orchestrator | skipping: [testbed-node-4] => (item={'id': '92b211835910f969dd0d8111f9243326037668da8e05cb541a55693f51b3b579', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.348569 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd3d843b0a0e48371e5490e4183da2380063c62f525e72e5e44b357611d51a682', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.348581 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dd6e7208bdf56445e41f3609a90bffc4387d7e476b86a46e614a3d0a8cc7d72e', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2025-11-01 15:23:20.348593 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4c357d0c95cf1b27a8b4cd8a7d0502ef62f5c5619111e3f10b6d98e5791bb88d', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2025-11-01 15:23:20.348611 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3a574b00ff4102b128170d03b6f9d96ec454d5140aba0e0f65c0e25e05c6bdd2', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2025-11-01 15:23:20.348623 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5b8e11b8a4aef6120a564d4f0848e350e68c70f2a8aff8873c3a05c895385d43', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 54 minutes (healthy)'})  2025-11-01 15:23:20.348634 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fa658c09532c4a25f3d6ced536cd39d90d45fc1b98c7f967200337903c980d33', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 55 minutes (healthy)'})  2025-11-01 15:23:20.348645 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1eaa0955fd3d95ff08c5563510494f2c5dc959696f244580610920d6a4913f57', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 57 minutes'})  2025-11-01 15:23:20.348656 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8bdc22249aaf69d4194f4ee28afaf19487d5b7e561be80eb05509c11a5a28d75', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 57 minutes'})  2025-11-01 15:23:20.348667 | orchestrator | skipping: [testbed-node-5] => (item={'id': '56c9d36d0712600ce74f41d599dbae2f9f6b1eddd8b6064c8dddf2fdbb351dad', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 58 minutes'})  2025-11-01 15:23:20.348678 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9a99a5d8060477569ca3f6b5d834211b3557e3167398eff11031d739f119fe0b', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 59 minutes (healthy)'})  2025-11-01 15:23:20.348689 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f9c6134fcdeeafda091e5f67d1d20f77bd65774556a0f58ee9d65c907b3cea80', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.348700 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6401eb00b06794f42e9ce1fe3f896199930ed6a718271865d3763599d6777c8c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.348711 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd33b5ced8e52e6a16b67dbb5cbd4cdfffe6a5ba3610bb7b9b0fcabc142d8da45', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:20.348727 | orchestrator | ok: [testbed-node-5] => (item={'id': '0180e3ac6c735e899fa2a999ef61634182ed35f50fe6d2fad8224c472c6d1e89', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2025-11-01 15:23:29.334775 | orchestrator | ok: [testbed-node-5] => (item={'id': 'ef796d4c400e345a2df2ad0d4279deb523b75d729b96166e14e5815ffc46d3aa', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2025-11-01 15:23:29.334888 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'f3e60b00608a6c874a7fbc5cc3c6fd86c7f426e69391ba495a19a8ed802df36b', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:29.334930 | orchestrator | skipping: [testbed-node-5] => (item={'id': '25793c4d6b49ee9ae5f6d6230e589fb8c81526eb2f4e212b4bb710393afb8692', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2025-11-01 15:23:29.334944 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9f90f4819914fa08a5a0bccad57b6ac3ce4f6e25bc303d95faaad0a8c54373f5', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2025-11-01 15:23:29.334956 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c5a749906c9eaa481aaadee3d0474ac3e8a6f48b80d8c2b25f4fa75651d4a1e1', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:29.334968 | orchestrator | skipping: [testbed-node-5] => (item={'id': '086fe1c5161aad6d99ca7976cc707fda8297b04d89d646a1f70bcf8ed83b6242', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:29.334980 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c3abdae1e1c5f646b0ef4373f9a445d002f984c3f7ba4c53a56a8e699e4c5a90', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up About an hour'})  2025-11-01 15:23:29.334991 | orchestrator | 2025-11-01 15:23:29.335003 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-11-01 15:23:29.335015 | orchestrator | Saturday 01 November 2025 15:23:20 +0000 (0:00:00.574) 0:00:05.463 ***** 2025-11-01 15:23:29.335026 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:29.335037 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:29.335048 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:29.335058 | orchestrator | 2025-11-01 15:23:29.335069 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-11-01 15:23:29.335080 | orchestrator | Saturday 01 November 2025 15:23:20 +0000 (0:00:00.323) 0:00:05.787 ***** 2025-11-01 15:23:29.335091 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:29.335103 | orchestrator | skipping: [testbed-node-4] 2025-11-01 15:23:29.335113 | orchestrator | skipping: [testbed-node-5] 2025-11-01 15:23:29.335124 | orchestrator | 2025-11-01 15:23:29.335134 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-11-01 15:23:29.335145 | orchestrator | Saturday 01 November 2025 15:23:21 +0000 (0:00:00.500) 0:00:06.287 ***** 2025-11-01 15:23:29.335155 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:29.335166 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:29.335176 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:29.335187 | orchestrator | 2025-11-01 15:23:29.335198 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-01 15:23:29.335208 | orchestrator | Saturday 01 November 2025 15:23:21 +0000 (0:00:00.301) 0:00:06.588 ***** 2025-11-01 15:23:29.335219 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:29.335229 | orchestrator | ok: 
[testbed-node-4] 2025-11-01 15:23:29.335240 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:29.335250 | orchestrator | 2025-11-01 15:23:29.335261 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-11-01 15:23:29.335272 | orchestrator | Saturday 01 November 2025 15:23:21 +0000 (0:00:00.306) 0:00:06.895 ***** 2025-11-01 15:23:29.335283 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-11-01 15:23:29.335294 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-11-01 15:23:29.335336 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:29.335347 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-11-01 15:23:29.335358 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-11-01 15:23:29.335369 | orchestrator | skipping: [testbed-node-4] 2025-11-01 15:23:29.335388 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-11-01 15:23:29.335399 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-11-01 15:23:29.335410 | orchestrator | skipping: [testbed-node-5] 2025-11-01 15:23:29.335421 | orchestrator | 2025-11-01 15:23:29.335431 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-11-01 15:23:29.335443 | orchestrator | Saturday 01 November 2025 15:23:22 +0000 (0:00:00.309) 0:00:07.205 ***** 2025-11-01 15:23:29.335453 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:29.335464 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:29.335475 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:29.335485 | orchestrator | 2025-11-01 15:23:29.335513 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-11-01 15:23:29.335525 | orchestrator | Saturday 01 November 2025 15:23:22 +0000 (0:00:00.510) 0:00:07.716 ***** 2025-11-01 15:23:29.335535 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:29.335546 | orchestrator | skipping: [testbed-node-4] 2025-11-01 15:23:29.335564 | orchestrator | skipping: [testbed-node-5] 2025-11-01 15:23:29.335575 | orchestrator | 2025-11-01 15:23:29.335586 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-11-01 15:23:29.335596 | orchestrator | Saturday 01 November 2025 15:23:22 +0000 (0:00:00.309) 0:00:08.025 ***** 2025-11-01 15:23:29.335607 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:29.335618 | orchestrator | skipping: [testbed-node-4] 2025-11-01 15:23:29.335628 | orchestrator | skipping: [testbed-node-5] 2025-11-01 15:23:29.335639 | orchestrator | 2025-11-01 15:23:29.335650 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-11-01 15:23:29.335660 | orchestrator | Saturday 01 November 2025 15:23:23 +0000 (0:00:00.318) 0:00:08.344 ***** 2025-11-01 15:23:29.335671 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:29.335682 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:29.335692 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:29.335703 | orchestrator | 2025-11-01 15:23:29.335714 | orchestrator | TASK [Aggregate test results step one] ***************************************** 
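The validator above walks the per-node container facts and only treats the /ceph-osd-<id> entries as relevant; everything else (nova, cinder, ceph-mds, and so on) is skipped. A rough hand-check of the same condition on one of the nodes, assuming Docker is the runtime behind those facts and using the two-OSDs-per-node layout visible in the ok: items above, might look like the sketch below (the real play derives the expected count from its inventory data rather than hard-coding it):

    # Count running containers whose name contains "ceph-osd" (e.g. /ceph-osd-0, /ceph-osd-4).
    expected=2   # two OSDs per node in this testbed, per the ok: items above (assumption for illustration)
    actual=$(docker ps --filter name=ceph-osd --filter status=running --format '{{.Names}}' | wc -l)
    if [ "$actual" -eq "$expected" ]; then
        echo "PASSED: $actual/$expected ceph-osd containers running"
    else
        echo "FAILED: $actual/$expected ceph-osd containers running"
    fi

The aggregation tasks that follow collect these per-node results into the report file written at the end of the play.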
2025-11-01 15:23:29.335725 | orchestrator | Saturday 01 November 2025 15:23:23 +0000 (0:00:00.331) 0:00:08.675 ***** 2025-11-01 15:23:29.335736 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:29.335746 | orchestrator | 2025-11-01 15:23:29.335758 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-11-01 15:23:29.335769 | orchestrator | Saturday 01 November 2025 15:23:24 +0000 (0:00:00.688) 0:00:09.364 ***** 2025-11-01 15:23:29.335779 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:29.335790 | orchestrator | 2025-11-01 15:23:29.335801 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-11-01 15:23:29.335812 | orchestrator | Saturday 01 November 2025 15:23:24 +0000 (0:00:00.271) 0:00:09.636 ***** 2025-11-01 15:23:29.335823 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:29.335833 | orchestrator | 2025-11-01 15:23:29.335844 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:23:29.335855 | orchestrator | Saturday 01 November 2025 15:23:24 +0000 (0:00:00.260) 0:00:09.896 ***** 2025-11-01 15:23:29.335865 | orchestrator | 2025-11-01 15:23:29.335876 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:23:29.335887 | orchestrator | Saturday 01 November 2025 15:23:24 +0000 (0:00:00.068) 0:00:09.965 ***** 2025-11-01 15:23:29.335898 | orchestrator | 2025-11-01 15:23:29.335908 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:23:29.335919 | orchestrator | Saturday 01 November 2025 15:23:24 +0000 (0:00:00.068) 0:00:10.034 ***** 2025-11-01 15:23:29.335930 | orchestrator | 2025-11-01 15:23:29.335940 | orchestrator | TASK [Print report file information] ******************************************* 2025-11-01 15:23:29.335951 | orchestrator | Saturday 01 November 2025 15:23:24 +0000 (0:00:00.074) 0:00:10.108 ***** 2025-11-01 15:23:29.335971 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:29.335981 | orchestrator | 2025-11-01 15:23:29.335992 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-11-01 15:23:29.336003 | orchestrator | Saturday 01 November 2025 15:23:25 +0000 (0:00:00.278) 0:00:10.386 ***** 2025-11-01 15:23:29.336014 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:29.336025 | orchestrator | 2025-11-01 15:23:29.336035 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-01 15:23:29.336046 | orchestrator | Saturday 01 November 2025 15:23:25 +0000 (0:00:00.264) 0:00:10.651 ***** 2025-11-01 15:23:29.336057 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:29.336068 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:29.336078 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:29.336089 | orchestrator | 2025-11-01 15:23:29.336100 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-11-01 15:23:29.336111 | orchestrator | Saturday 01 November 2025 15:23:25 +0000 (0:00:00.298) 0:00:10.949 ***** 2025-11-01 15:23:29.336121 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:29.336132 | orchestrator | 2025-11-01 15:23:29.336143 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-11-01 15:23:29.336154 | orchestrator | Saturday 01 November 2025 15:23:26 
+0000 (0:00:00.713) 0:00:11.663 ***** 2025-11-01 15:23:29.336164 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-01 15:23:29.336175 | orchestrator | 2025-11-01 15:23:29.336186 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-11-01 15:23:29.336196 | orchestrator | Saturday 01 November 2025 15:23:28 +0000 (0:00:01.641) 0:00:13.305 ***** 2025-11-01 15:23:29.336207 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:29.336218 | orchestrator | 2025-11-01 15:23:29.336229 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-11-01 15:23:29.336239 | orchestrator | Saturday 01 November 2025 15:23:28 +0000 (0:00:00.134) 0:00:13.439 ***** 2025-11-01 15:23:29.336250 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:29.336261 | orchestrator | 2025-11-01 15:23:29.336272 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-11-01 15:23:29.336282 | orchestrator | Saturday 01 November 2025 15:23:28 +0000 (0:00:00.384) 0:00:13.824 ***** 2025-11-01 15:23:29.336293 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:29.336322 | orchestrator | 2025-11-01 15:23:29.336333 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-11-01 15:23:29.336344 | orchestrator | Saturday 01 November 2025 15:23:28 +0000 (0:00:00.158) 0:00:13.982 ***** 2025-11-01 15:23:29.336355 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:29.336366 | orchestrator | 2025-11-01 15:23:29.336376 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-01 15:23:29.336387 | orchestrator | Saturday 01 November 2025 15:23:28 +0000 (0:00:00.147) 0:00:14.130 ***** 2025-11-01 15:23:29.336398 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:29.336409 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:29.336419 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:29.336430 | orchestrator | 2025-11-01 15:23:29.336441 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-11-01 15:23:29.336459 | orchestrator | Saturday 01 November 2025 15:23:29 +0000 (0:00:00.332) 0:00:14.463 ***** 2025-11-01 15:23:42.679473 | orchestrator | changed: [testbed-node-5] 2025-11-01 15:23:42.679586 | orchestrator | changed: [testbed-node-4] 2025-11-01 15:23:42.679601 | orchestrator | changed: [testbed-node-3] 2025-11-01 15:23:42.679614 | orchestrator | 2025-11-01 15:23:42.679627 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-11-01 15:23:42.679686 | orchestrator | Saturday 01 November 2025 15:23:31 +0000 (0:00:02.501) 0:00:16.965 ***** 2025-11-01 15:23:42.679699 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:42.679712 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:42.679723 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:42.679735 | orchestrator | 2025-11-01 15:23:42.679746 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-11-01 15:23:42.679776 | orchestrator | Saturday 01 November 2025 15:23:32 +0000 (0:00:00.321) 0:00:17.286 ***** 2025-11-01 15:23:42.679788 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:42.679799 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:42.679809 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:42.679820 | orchestrator | 2025-11-01 
15:23:42.679831 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-11-01 15:23:42.679843 | orchestrator | Saturday 01 November 2025 15:23:32 +0000 (0:00:00.508) 0:00:17.794 ***** 2025-11-01 15:23:42.679854 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:42.679864 | orchestrator | skipping: [testbed-node-4] 2025-11-01 15:23:42.679875 | orchestrator | skipping: [testbed-node-5] 2025-11-01 15:23:42.679887 | orchestrator | 2025-11-01 15:23:42.679898 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-11-01 15:23:42.679909 | orchestrator | Saturday 01 November 2025 15:23:33 +0000 (0:00:00.402) 0:00:18.197 ***** 2025-11-01 15:23:42.679920 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:42.679931 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:42.679942 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:42.679952 | orchestrator | 2025-11-01 15:23:42.679963 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-11-01 15:23:42.679974 | orchestrator | Saturday 01 November 2025 15:23:33 +0000 (0:00:00.585) 0:00:18.783 ***** 2025-11-01 15:23:42.679985 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:42.679996 | orchestrator | skipping: [testbed-node-4] 2025-11-01 15:23:42.680007 | orchestrator | skipping: [testbed-node-5] 2025-11-01 15:23:42.680019 | orchestrator | 2025-11-01 15:23:42.680031 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-11-01 15:23:42.680044 | orchestrator | Saturday 01 November 2025 15:23:33 +0000 (0:00:00.319) 0:00:19.102 ***** 2025-11-01 15:23:42.680056 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:42.680069 | orchestrator | skipping: [testbed-node-4] 2025-11-01 15:23:42.680081 | orchestrator | skipping: [testbed-node-5] 2025-11-01 15:23:42.680094 | orchestrator | 2025-11-01 15:23:42.680106 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-01 15:23:42.680119 | orchestrator | Saturday 01 November 2025 15:23:34 +0000 (0:00:00.312) 0:00:19.415 ***** 2025-11-01 15:23:42.680131 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:42.680144 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:42.680156 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:42.680168 | orchestrator | 2025-11-01 15:23:42.680180 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-11-01 15:23:42.680193 | orchestrator | Saturday 01 November 2025 15:23:34 +0000 (0:00:00.508) 0:00:19.923 ***** 2025-11-01 15:23:42.680205 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:42.680217 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:42.680229 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:42.680241 | orchestrator | 2025-11-01 15:23:42.680253 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-11-01 15:23:42.680266 | orchestrator | Saturday 01 November 2025 15:23:35 +0000 (0:00:00.788) 0:00:20.712 ***** 2025-11-01 15:23:42.680278 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:42.680290 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:42.680331 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:42.680343 | orchestrator | 2025-11-01 15:23:42.680356 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-11-01 
15:23:42.680368 | orchestrator | Saturday 01 November 2025 15:23:35 +0000 (0:00:00.351) 0:00:21.064 ***** 2025-11-01 15:23:42.680380 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:42.680391 | orchestrator | skipping: [testbed-node-4] 2025-11-01 15:23:42.680402 | orchestrator | skipping: [testbed-node-5] 2025-11-01 15:23:42.680412 | orchestrator | 2025-11-01 15:23:42.680423 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-11-01 15:23:42.680434 | orchestrator | Saturday 01 November 2025 15:23:36 +0000 (0:00:00.315) 0:00:21.379 ***** 2025-11-01 15:23:42.680454 | orchestrator | ok: [testbed-node-3] 2025-11-01 15:23:42.680465 | orchestrator | ok: [testbed-node-4] 2025-11-01 15:23:42.680476 | orchestrator | ok: [testbed-node-5] 2025-11-01 15:23:42.680487 | orchestrator | 2025-11-01 15:23:42.680498 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-11-01 15:23:42.680509 | orchestrator | Saturday 01 November 2025 15:23:36 +0000 (0:00:00.654) 0:00:22.033 ***** 2025-11-01 15:23:42.680520 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-01 15:23:42.680532 | orchestrator | 2025-11-01 15:23:42.680543 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-11-01 15:23:42.680554 | orchestrator | Saturday 01 November 2025 15:23:37 +0000 (0:00:00.420) 0:00:22.454 ***** 2025-11-01 15:23:42.680565 | orchestrator | skipping: [testbed-node-3] 2025-11-01 15:23:42.680576 | orchestrator | 2025-11-01 15:23:42.680587 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-11-01 15:23:42.680598 | orchestrator | Saturday 01 November 2025 15:23:37 +0000 (0:00:00.265) 0:00:22.719 ***** 2025-11-01 15:23:42.680609 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-01 15:23:42.680620 | orchestrator | 2025-11-01 15:23:42.680631 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-11-01 15:23:42.680642 | orchestrator | Saturday 01 November 2025 15:23:39 +0000 (0:00:01.666) 0:00:24.386 ***** 2025-11-01 15:23:42.680653 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-01 15:23:42.680664 | orchestrator | 2025-11-01 15:23:42.680675 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-11-01 15:23:42.680686 | orchestrator | Saturday 01 November 2025 15:23:39 +0000 (0:00:00.266) 0:00:24.652 ***** 2025-11-01 15:23:42.680715 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-01 15:23:42.680728 | orchestrator | 2025-11-01 15:23:42.680739 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:23:42.680756 | orchestrator | Saturday 01 November 2025 15:23:39 +0000 (0:00:00.277) 0:00:24.930 ***** 2025-11-01 15:23:42.680767 | orchestrator | 2025-11-01 15:23:42.680779 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:23:42.680790 | orchestrator | Saturday 01 November 2025 15:23:39 +0000 (0:00:00.068) 0:00:24.999 ***** 2025-11-01 15:23:42.680802 | orchestrator | 2025-11-01 15:23:42.680814 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-11-01 15:23:42.680825 | orchestrator | Saturday 01 November 2025 15:23:39 +0000 (0:00:00.073) 0:00:25.073 
***** 2025-11-01 15:23:42.680836 | orchestrator | 2025-11-01 15:23:42.680848 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-11-01 15:23:42.680859 | orchestrator | Saturday 01 November 2025 15:23:40 +0000 (0:00:00.075) 0:00:25.148 ***** 2025-11-01 15:23:42.680871 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-01 15:23:42.680882 | orchestrator | 2025-11-01 15:23:42.680894 | orchestrator | TASK [Print report file information] ******************************************* 2025-11-01 15:23:42.680905 | orchestrator | Saturday 01 November 2025 15:23:41 +0000 (0:00:01.659) 0:00:26.807 ***** 2025-11-01 15:23:42.680917 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-11-01 15:23:42.680929 | orchestrator |  "msg": [ 2025-11-01 15:23:42.680940 | orchestrator |  "Validator run completed.", 2025-11-01 15:23:42.680952 | orchestrator |  "You can find the report file here:", 2025-11-01 15:23:42.680964 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-11-01T15:23:16+00:00-report.json", 2025-11-01 15:23:42.680976 | orchestrator |  "on the following host:", 2025-11-01 15:23:42.680988 | orchestrator |  "testbed-manager" 2025-11-01 15:23:42.680999 | orchestrator |  ] 2025-11-01 15:23:42.681011 | orchestrator | } 2025-11-01 15:23:42.681023 | orchestrator | 2025-11-01 15:23:42.681034 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 15:23:42.681054 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-11-01 15:23:42.681068 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-11-01 15:23:42.681080 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-11-01 15:23:42.681091 | orchestrator | 2025-11-01 15:23:42.681103 | orchestrator | 2025-11-01 15:23:42.681114 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 15:23:42.681126 | orchestrator | Saturday 01 November 2025 15:23:42 +0000 (0:00:00.638) 0:00:27.445 ***** 2025-11-01 15:23:42.681138 | orchestrator | =============================================================================== 2025-11-01 15:23:42.681149 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.50s 2025-11-01 15:23:42.681160 | orchestrator | Aggregate test results step one ----------------------------------------- 1.67s 2025-11-01 15:23:42.681172 | orchestrator | Write report file ------------------------------------------------------- 1.66s 2025-11-01 15:23:42.681183 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.64s 2025-11-01 15:23:42.681195 | orchestrator | Get timestamp for report file ------------------------------------------- 0.88s 2025-11-01 15:23:42.681206 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.81s 2025-11-01 15:23:42.681218 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.79s 2025-11-01 15:23:42.681229 | orchestrator | Create report output directory ------------------------------------------ 0.76s 2025-11-01 15:23:42.681241 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.71s 2025-11-01 15:23:42.681252 | orchestrator | Aggregate test 
results step one ----------------------------------------- 0.69s 2025-11-01 15:23:42.681264 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.65s 2025-11-01 15:23:42.681275 | orchestrator | Print report file information ------------------------------------------- 0.64s 2025-11-01 15:23:42.681286 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.59s 2025-11-01 15:23:42.681319 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.57s 2025-11-01 15:23:42.681330 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.54s 2025-11-01 15:23:42.681342 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.51s 2025-11-01 15:23:42.681353 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.51s 2025-11-01 15:23:42.681365 | orchestrator | Prepare test data ------------------------------------------------------- 0.51s 2025-11-01 15:23:42.681376 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.50s 2025-11-01 15:23:42.681388 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.42s 2025-11-01 15:23:42.993441 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-11-01 15:23:43.007610 | orchestrator | + set -e 2025-11-01 15:23:43.007639 | orchestrator | + source /opt/manager-vars.sh 2025-11-01 15:23:43.007650 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-01 15:23:43.008628 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-01 15:23:43.008648 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-01 15:23:43.008658 | orchestrator | ++ CEPH_VERSION=reef 2025-11-01 15:23:43.008668 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-01 15:23:43.008752 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-01 15:23:43.008867 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-01 15:23:43.008877 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-01 15:23:43.008886 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-01 15:23:43.008896 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-01 15:23:43.008906 | orchestrator | ++ export ARA=false 2025-11-01 15:23:43.008915 | orchestrator | ++ ARA=false 2025-11-01 15:23:43.008925 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-01 15:23:43.008934 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-01 15:23:43.008971 | orchestrator | ++ export TEMPEST=false 2025-11-01 15:23:43.008981 | orchestrator | ++ TEMPEST=false 2025-11-01 15:23:43.008990 | orchestrator | ++ export IS_ZUUL=true 2025-11-01 15:23:43.009000 | orchestrator | ++ IS_ZUUL=true 2025-11-01 15:23:43.009009 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.208 2025-11-01 15:23:43.009019 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.208 2025-11-01 15:23:43.009029 | orchestrator | ++ export EXTERNAL_API=false 2025-11-01 15:23:43.009105 | orchestrator | ++ EXTERNAL_API=false 2025-11-01 15:23:43.009118 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-01 15:23:43.009128 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-01 15:23:43.009137 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-01 15:23:43.009147 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-01 15:23:43.009156 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-01 15:23:43.009166 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-01 
15:23:43.009175 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-11-01 15:23:43.009185 | orchestrator | + source /etc/os-release 2025-11-01 15:23:43.009195 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2025-11-01 15:23:43.009204 | orchestrator | ++ NAME=Ubuntu 2025-11-01 15:23:43.009214 | orchestrator | ++ VERSION_ID=24.04 2025-11-01 15:23:43.009223 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2025-11-01 15:23:43.009240 | orchestrator | ++ VERSION_CODENAME=noble 2025-11-01 15:23:43.009250 | orchestrator | ++ ID=ubuntu 2025-11-01 15:23:43.009260 | orchestrator | ++ ID_LIKE=debian 2025-11-01 15:23:43.009269 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-11-01 15:23:43.009279 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-11-01 15:23:43.010995 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-11-01 15:23:43.011016 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-11-01 15:23:43.011026 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-11-01 15:23:43.011037 | orchestrator | ++ LOGO=ubuntu-logo 2025-11-01 15:23:43.011046 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-11-01 15:23:43.011056 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-11-01 15:23:43.011068 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-11-01 15:23:43.040900 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-11-01 15:24:08.061945 | orchestrator | 2025-11-01 15:24:08.062083 | orchestrator | # Status of Elasticsearch 2025-11-01 15:24:08.062096 | orchestrator | 2025-11-01 15:24:08.062106 | orchestrator | + pushd /opt/configuration/contrib 2025-11-01 15:24:08.062116 | orchestrator | + echo 2025-11-01 15:24:08.062124 | orchestrator | + echo '# Status of Elasticsearch' 2025-11-01 15:24:08.062132 | orchestrator | + echo 2025-11-01 15:24:08.062141 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-11-01 15:24:08.235190 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-11-01 15:24:08.236476 | orchestrator | 2025-11-01 15:24:08.236496 | orchestrator | # Status of MariaDB 2025-11-01 15:24:08.236506 | orchestrator | 2025-11-01 15:24:08.236514 | orchestrator | + echo 2025-11-01 15:24:08.236522 | orchestrator | + echo '# Status of MariaDB' 2025-11-01 15:24:08.236530 | orchestrator | + echo 2025-11-01 15:24:08.236538 | orchestrator | + MARIADB_USER=root_shard_0 2025-11-01 15:24:08.236546 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-11-01 15:24:08.322249 | orchestrator | Reading package lists... 2025-11-01 15:24:08.678610 | orchestrator | Building dependency tree... 2025-11-01 15:24:08.678955 | orchestrator | Reading state information... 2025-11-01 15:24:09.117940 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-11-01 15:24:09.118066 | orchestrator | bc set to manually installed. 
2025-11-01 15:24:09.118083 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-11-01 15:24:09.859795 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-11-01 15:24:09.860634 | orchestrator | 2025-11-01 15:24:09.860719 | orchestrator | # Status of Prometheus 2025-11-01 15:24:09.860735 | orchestrator | 2025-11-01 15:24:09.860747 | orchestrator | + echo 2025-11-01 15:24:09.860758 | orchestrator | + echo '# Status of Prometheus' 2025-11-01 15:24:09.860800 | orchestrator | + echo 2025-11-01 15:24:09.860813 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-11-01 15:24:09.949888 | orchestrator | Unauthorized 2025-11-01 15:24:09.953706 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-11-01 15:24:10.019597 | orchestrator | Unauthorized 2025-11-01 15:24:10.023167 | orchestrator | 2025-11-01 15:24:10.023191 | orchestrator | # Status of RabbitMQ 2025-11-01 15:24:10.023203 | orchestrator | 2025-11-01 15:24:10.023214 | orchestrator | + echo 2025-11-01 15:24:10.023225 | orchestrator | + echo '# Status of RabbitMQ' 2025-11-01 15:24:10.023236 | orchestrator | + echo 2025-11-01 15:24:10.023248 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-11-01 15:24:10.541728 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-11-01 15:24:10.550610 | orchestrator | 2025-11-01 15:24:10.550641 | orchestrator | # Status of Redis 2025-11-01 15:24:10.550652 | orchestrator | 2025-11-01 15:24:10.550662 | orchestrator | + echo 2025-11-01 15:24:10.550672 | orchestrator | + echo '# Status of Redis' 2025-11-01 15:24:10.550683 | orchestrator | + echo 2025-11-01 15:24:10.550694 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-11-01 15:24:10.556719 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001127s;;;0.000000;10.000000 2025-11-01 15:24:10.557118 | orchestrator | 2025-11-01 15:24:10.557137 | orchestrator | + popd 2025-11-01 15:24:10.557147 | orchestrator | + echo 2025-11-01 15:24:10.557157 | orchestrator | # Create backup of MariaDB database 2025-11-01 15:24:10.557168 | orchestrator | 2025-11-01 15:24:10.557178 | orchestrator | + echo '# Create backup of MariaDB database' 2025-11-01 15:24:10.557188 | orchestrator | + echo 2025-11-01 15:24:10.557197 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-11-01 15:24:12.592620 | orchestrator | 2025-11-01 15:24:12 | INFO  | Task 6b124e51-d685-47d8-a7e7-c13cdaa7541b (mariadb_backup) was prepared for execution. 2025-11-01 15:24:12.592710 | orchestrator | 2025-11-01 15:24:12 | INFO  | It takes a moment until task 6b124e51-d685-47d8-a7e7-c13cdaa7541b (mariadb_backup) has been started and output is visible here. 
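Before the backup play output below, note that the health checks run by 200-infrastructure.sh above wrap standard interfaces: check_galera_cluster reads wsrep_cluster_size, check_rabbitmq_cluster talks to the RabbitMQ management API, and the Redis probe is a raw TCP exchange. The same state can be spot-checked by hand with stock clients. The sketch below reuses the endpoints and credentials printed above, but assumes default ports (9200, 15672) where the log does not show them, and the TLS-terminating HAProxy frontends of this testbed may require additional options:

    # Elasticsearch/OpenSearch cluster health (reported as "green" by the plugin above; port 9200 assumed).
    curl -sk https://api-int.testbed.osism.xyz:9200/_cluster/health

    # Galera cluster size (the plugin above reported 3 nodes via wsrep_cluster_size).
    mysql -h api-int.testbed.osism.xyz -u root_shard_0 -ppassword \
          -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"

    # RabbitMQ cluster overview via the management API (port 15672 assumed).
    curl -sk -u openstack:password https://api-int.testbed.osism.xyz:15672/api/overview

    # Redis replication role on the primary probed above.
    redis-cli -h 192.168.16.10 -p 6379 -a QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24 info replication

The plain curl calls against Prometheus returned "Unauthorized" above; the script tolerates this (curl still exits 0 under set -e), and an authenticated request would be needed to see the actual health output.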
2025-11-01 15:24:41.704079 | orchestrator | 2025-11-01 15:24:41.704193 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 15:24:41.704210 | orchestrator | 2025-11-01 15:24:41.704222 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 15:24:41.704234 | orchestrator | Saturday 01 November 2025 15:24:16 +0000 (0:00:00.174) 0:00:00.174 ***** 2025-11-01 15:24:41.704245 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:24:41.704257 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:24:41.704268 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:24:41.704373 | orchestrator | 2025-11-01 15:24:41.704386 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 15:24:41.704397 | orchestrator | Saturday 01 November 2025 15:24:17 +0000 (0:00:00.322) 0:00:00.496 ***** 2025-11-01 15:24:41.704408 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-11-01 15:24:41.704419 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-11-01 15:24:41.704430 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-11-01 15:24:41.704440 | orchestrator | 2025-11-01 15:24:41.704451 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-11-01 15:24:41.704462 | orchestrator | 2025-11-01 15:24:41.704473 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-11-01 15:24:41.704483 | orchestrator | Saturday 01 November 2025 15:24:17 +0000 (0:00:00.582) 0:00:01.079 ***** 2025-11-01 15:24:41.704495 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-01 15:24:41.704506 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-11-01 15:24:41.704517 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-11-01 15:24:41.704528 | orchestrator | 2025-11-01 15:24:41.704538 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-01 15:24:41.704574 | orchestrator | Saturday 01 November 2025 15:24:18 +0000 (0:00:00.439) 0:00:01.519 ***** 2025-11-01 15:24:41.704585 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 15:24:41.704597 | orchestrator | 2025-11-01 15:24:41.704608 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-11-01 15:24:41.704619 | orchestrator | Saturday 01 November 2025 15:24:18 +0000 (0:00:00.552) 0:00:02.071 ***** 2025-11-01 15:24:41.704632 | orchestrator | ok: [testbed-node-2] 2025-11-01 15:24:41.704644 | orchestrator | ok: [testbed-node-0] 2025-11-01 15:24:41.704656 | orchestrator | ok: [testbed-node-1] 2025-11-01 15:24:41.704668 | orchestrator | 2025-11-01 15:24:41.704680 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-11-01 15:24:41.704692 | orchestrator | Saturday 01 November 2025 15:24:22 +0000 (0:00:03.631) 0:00:05.703 ***** 2025-11-01 15:24:41.704704 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-11-01 15:24:41.704716 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-11-01 15:24:41.704729 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-11-01 15:24:41.704741 | orchestrator | 
mariadb_bootstrap_restart 2025-11-01 15:24:41.704754 | orchestrator | skipping: [testbed-node-1] 2025-11-01 15:24:41.704766 | orchestrator | skipping: [testbed-node-2] 2025-11-01 15:24:41.704777 | orchestrator | changed: [testbed-node-0] 2025-11-01 15:24:41.704789 | orchestrator | 2025-11-01 15:24:41.704802 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-11-01 15:24:41.704814 | orchestrator | skipping: no hosts matched 2025-11-01 15:24:41.704827 | orchestrator | 2025-11-01 15:24:41.704838 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-11-01 15:24:41.704850 | orchestrator | skipping: no hosts matched 2025-11-01 15:24:41.704866 | orchestrator | 2025-11-01 15:24:41.704885 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-11-01 15:24:41.704903 | orchestrator | skipping: no hosts matched 2025-11-01 15:24:41.704915 | orchestrator | 2025-11-01 15:24:41.704927 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-11-01 15:24:41.704939 | orchestrator | 2025-11-01 15:24:41.704951 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-11-01 15:24:41.704963 | orchestrator | Saturday 01 November 2025 15:24:40 +0000 (0:00:18.155) 0:00:23.858 ***** 2025-11-01 15:24:41.704975 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:24:41.704985 | orchestrator | skipping: [testbed-node-1] 2025-11-01 15:24:41.704996 | orchestrator | skipping: [testbed-node-2] 2025-11-01 15:24:41.705006 | orchestrator | 2025-11-01 15:24:41.705017 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-11-01 15:24:41.705028 | orchestrator | Saturday 01 November 2025 15:24:40 +0000 (0:00:00.300) 0:00:24.159 ***** 2025-11-01 15:24:41.705038 | orchestrator | skipping: [testbed-node-0] 2025-11-01 15:24:41.705048 | orchestrator | skipping: [testbed-node-1] 2025-11-01 15:24:41.705059 | orchestrator | skipping: [testbed-node-2] 2025-11-01 15:24:41.705069 | orchestrator | 2025-11-01 15:24:41.705080 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 15:24:41.705091 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 15:24:41.705103 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-01 15:24:41.705114 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-01 15:24:41.705125 | orchestrator | 2025-11-01 15:24:41.705135 | orchestrator | 2025-11-01 15:24:41.705146 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 15:24:41.705173 | orchestrator | Saturday 01 November 2025 15:24:41 +0000 (0:00:00.420) 0:00:24.579 ***** 2025-11-01 15:24:41.705192 | orchestrator | =============================================================================== 2025-11-01 15:24:41.705203 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 18.16s 2025-11-01 15:24:41.705231 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.63s 2025-11-01 15:24:41.705243 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2025-11-01 15:24:41.705254 | 
orchestrator | mariadb : include_tasks ------------------------------------------------- 0.55s 2025-11-01 15:24:41.705264 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.44s 2025-11-01 15:24:41.705275 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.42s 2025-11-01 15:24:41.705285 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-11-01 15:24:41.705301 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2025-11-01 15:24:42.052597 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-11-01 15:24:42.059027 | orchestrator | + set -e 2025-11-01 15:24:42.059061 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-01 15:24:42.059075 | orchestrator | ++ export INTERACTIVE=false 2025-11-01 15:24:42.059087 | orchestrator | ++ INTERACTIVE=false 2025-11-01 15:24:42.059098 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-01 15:24:42.059108 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-01 15:24:42.059119 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-11-01 15:24:42.059640 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-11-01 15:24:42.062582 | orchestrator | 2025-11-01 15:24:42.062605 | orchestrator | # OpenStack endpoints 2025-11-01 15:24:42.062616 | orchestrator | 2025-11-01 15:24:42.062628 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-01 15:24:42.062640 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-01 15:24:42.062650 | orchestrator | + export OS_CLOUD=admin 2025-11-01 15:24:42.062661 | orchestrator | + OS_CLOUD=admin 2025-11-01 15:24:42.062672 | orchestrator | + echo 2025-11-01 15:24:42.062683 | orchestrator | + echo '# OpenStack endpoints' 2025-11-01 15:24:42.062694 | orchestrator | + echo 2025-11-01 15:24:42.062705 | orchestrator | + openstack endpoint list 2025-11-01 15:24:45.673625 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-11-01 15:24:45.673719 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-11-01 15:24:45.673731 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-11-01 15:24:45.673740 | orchestrator | | 00159fa4a03f45a78cd7c45074edc825 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-11-01 15:24:45.673749 | orchestrator | | 032b5b3fe4d94fd597b71730d4e1f1a2 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-11-01 15:24:45.673758 | orchestrator | | 08a90b7d1fcb4d8293f05c320502cf21 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-11-01 15:24:45.673767 | orchestrator | | 3faeb4d36026433e846d0d4df75ac8a7 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-11-01 15:24:45.673775 | orchestrator | | 40372058ec6e423a85026b2c5ce2d8f7 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-11-01 15:24:45.673784 | orchestrator | | 
4364e1a2c3ff4fd1a2222263363bbfcd | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-11-01 15:24:45.673793 | orchestrator | | 545452d34b48425f8dd5b4349497ddd1 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-11-01 15:24:45.673821 | orchestrator | | 64cb997202844dd99040bd21f1682eaf | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-11-01 15:24:45.673831 | orchestrator | | 679d2883ad22475891b2e2e33aef97ea | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-11-01 15:24:45.673839 | orchestrator | | 6a300e757a85417093fd2888ac4eac1c | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-11-01 15:24:45.673848 | orchestrator | | 7b841e647dfb424da34f114e5304d3a7 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-11-01 15:24:45.673857 | orchestrator | | 7ff64448b1c949598bbb10a51d0fca27 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-11-01 15:24:45.673865 | orchestrator | | 90fb430bc62c437682582e05f1b5d0d6 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-11-01 15:24:45.673874 | orchestrator | | 9bd7f8402036453e81003ab9562208e4 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-11-01 15:24:45.673883 | orchestrator | | a9950ba9e0544fa282bd6fdd06d1f113 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-11-01 15:24:45.673898 | orchestrator | | b80fdc4b5f9648a8be8be97db9937497 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-11-01 15:24:45.673913 | orchestrator | | b9497ec465884aa481400fac1a61646c | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-11-01 15:24:45.673944 | orchestrator | | c116ffdd9d7f4e6f9c323f47ff68aa18 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-11-01 15:24:45.673960 | orchestrator | | dd534d2a747f4bf68a3d62c0bcfad929 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-11-01 15:24:45.673974 | orchestrator | | e4ec47fae41e48838dffd71f996cc049 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-11-01 15:24:45.674005 | orchestrator | | ea32d423bdcc40c3a55ea9ca698dca30 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-11-01 15:24:45.674086 | orchestrator | | eae329b6c21b4591bb434d08c2596112 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-11-01 15:24:45.674099 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-11-01 15:24:45.999022 | orchestrator | 2025-11-01 15:24:45.999107 | orchestrator | # Cinder 2025-11-01 15:24:45.999121 | orchestrator | 2025-11-01 15:24:45.999133 | orchestrator | + echo 2025-11-01 15:24:45.999144 | orchestrator | + echo '# Cinder' 2025-11-01 15:24:45.999156 | orchestrator | + echo 2025-11-01 15:24:45.999167 | orchestrator | + openstack volume service list 2025-11-01 
15:24:48.803624 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-11-01 15:24:48.803729 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-11-01 15:24:48.803744 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-11-01 15:24:48.803780 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-11-01T15:24:45.000000 | 2025-11-01 15:24:48.803791 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-11-01T15:24:45.000000 | 2025-11-01 15:24:48.803802 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-11-01T15:24:45.000000 | 2025-11-01 15:24:48.803812 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-11-01T15:24:45.000000 | 2025-11-01 15:24:48.803823 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-11-01T15:24:45.000000 | 2025-11-01 15:24:48.803833 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-11-01T15:24:46.000000 | 2025-11-01 15:24:48.803844 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-11-01T15:24:45.000000 | 2025-11-01 15:24:48.803855 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-11-01T15:24:45.000000 | 2025-11-01 15:24:48.803865 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-11-01T15:24:45.000000 | 2025-11-01 15:24:48.803876 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-11-01 15:24:49.081652 | orchestrator | 2025-11-01 15:24:49.081736 | orchestrator | # Neutron 2025-11-01 15:24:49.081749 | orchestrator | 2025-11-01 15:24:49.081761 | orchestrator | + echo 2025-11-01 15:24:49.081772 | orchestrator | + echo '# Neutron' 2025-11-01 15:24:49.081784 | orchestrator | + echo 2025-11-01 15:24:49.081795 | orchestrator | + openstack network agent list 2025-11-01 15:24:51.853056 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-11-01 15:24:51.853158 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-11-01 15:24:51.853172 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-11-01 15:24:51.853184 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-11-01 15:24:51.853195 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-11-01 15:24:51.853206 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-11-01 15:24:51.853216 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-11-01 15:24:51.853227 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-11-01 15:24:51.853238 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-11-01 15:24:51.853266 | orchestrator | | 
e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-11-01 15:24:51.853278 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-11-01 15:24:51.853289 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-11-01 15:24:51.853299 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-11-01 15:24:52.165736 | orchestrator | + openstack network service provider list 2025-11-01 15:24:54.758599 | orchestrator | +---------------+------+---------+ 2025-11-01 15:24:54.758692 | orchestrator | | Service Type | Name | Default | 2025-11-01 15:24:54.758704 | orchestrator | +---------------+------+---------+ 2025-11-01 15:24:54.758714 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-11-01 15:24:54.758724 | orchestrator | +---------------+------+---------+ 2025-11-01 15:24:55.058749 | orchestrator | 2025-11-01 15:24:55.058818 | orchestrator | # Nova 2025-11-01 15:24:55.058828 | orchestrator | 2025-11-01 15:24:55.058837 | orchestrator | + echo 2025-11-01 15:24:55.058844 | orchestrator | + echo '# Nova' 2025-11-01 15:24:55.058853 | orchestrator | + echo 2025-11-01 15:24:55.058861 | orchestrator | + openstack compute service list 2025-11-01 15:24:57.762917 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-11-01 15:24:57.763014 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-11-01 15:24:57.763029 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-11-01 15:24:57.763041 | orchestrator | | 818829b5-87fa-4e78-87b0-f1a85061bd8b | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-11-01T15:24:48.000000 | 2025-11-01 15:24:57.763053 | orchestrator | | 71ee1b51-3ff4-45b8-a610-3dc7236f33ba | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-11-01T15:24:52.000000 | 2025-11-01 15:24:57.763065 | orchestrator | | 37d89435-90fc-46c6-9931-d52dcd11ea1e | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-11-01T15:24:56.000000 | 2025-11-01 15:24:57.763076 | orchestrator | | 938fd3d9-1bc2-458f-8af2-775975cd67b5 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-11-01T15:24:55.000000 | 2025-11-01 15:24:57.763087 | orchestrator | | 726c3a90-407b-4003-81d1-b51cd108e5e3 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-11-01T15:24:57.000000 | 2025-11-01 15:24:57.763097 | orchestrator | | 65060d6b-ce4d-4145-abb9-49b6afa17712 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-11-01T15:24:57.000000 | 2025-11-01 15:24:57.763108 | orchestrator | | 958350e3-d747-49eb-99d0-f8d3c2dd7b54 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-11-01T15:24:51.000000 | 2025-11-01 15:24:57.763119 | orchestrator | | e5985ae4-eaab-434b-83e6-bf2c9b6c540f | nova-compute | testbed-node-5 | nova | enabled | up | 2025-11-01T15:24:52.000000 | 2025-11-01 15:24:57.763130 | orchestrator | | 2dd9587a-555a-4ff0-adc1-7b0ae7e4d798 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-11-01T15:24:52.000000 | 2025-11-01 15:24:57.763141 | orchestrator | 
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-11-01 15:24:58.091118 | orchestrator | + openstack hypervisor list 2025-11-01 15:25:00.861388 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-11-01 15:25:00.861497 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-11-01 15:25:00.861513 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-11-01 15:25:00.861525 | orchestrator | | 5e68cbfa-4905-4e8b-9cba-774e617a521d | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-11-01 15:25:00.861535 | orchestrator | | 67b84949-49e3-4323-99ca-6425ba59866d | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-11-01 15:25:00.861546 | orchestrator | | 95cfd42c-2361-4ef3-9c31-de66ecb706ca | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-11-01 15:25:00.861557 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-11-01 15:25:01.129987 | orchestrator | 2025-11-01 15:25:01.130142 | orchestrator | # Run OpenStack test play 2025-11-01 15:25:01.130160 | orchestrator | 2025-11-01 15:25:01.130173 | orchestrator | + echo 2025-11-01 15:25:01.130185 | orchestrator | + echo '# Run OpenStack test play' 2025-11-01 15:25:01.130197 | orchestrator | + echo 2025-11-01 15:25:01.130208 | orchestrator | + osism apply --environment openstack test 2025-11-01 15:25:03.147785 | orchestrator | 2025-11-01 15:25:03 | INFO  | Trying to run play test in environment openstack 2025-11-01 15:25:13.265041 | orchestrator | 2025-11-01 15:25:13 | INFO  | Task d8defa3a-35a5-430e-a35e-a3a5b104f19f (test) was prepared for execution. 2025-11-01 15:25:13.265136 | orchestrator | 2025-11-01 15:25:13 | INFO  | It takes a moment until task d8defa3a-35a5-430e-a35e-a3a5b104f19f (test) has been started and output is visible here. 
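The Cinder, Neutron and Nova listings above act as a health check before the test play is started. As a minimal sketch (not part of the job itself; the helper name, retry count and sleep interval are illustrative assumptions), the same admin-side CLI calls could be wrapped into a small gate that only proceeds once no service or agent reports a failure:

#!/usr/bin/env bash
# Illustrative sketch only: re-run the service listings shown above and
# wait until none of them reports a failed component.
set -eu

services_healthy() {
    local out
    # Same admin-side listings as in the log above.
    out="$(openstack volume service list
           openstack compute service list
           openstack network agent list)"
    # "down" marks a failed Cinder/Nova service, "XXX" a dead Neutron agent.
    ! grep -Eq 'down|XXX' <<< "$out"
}

# Assumed retry budget: 30 attempts, 10 seconds apart (roughly 5 minutes).
for _ in $(seq 1 30); do
    if services_healthy; then
        echo "All Cinder/Nova services and Neutron agents are up"
        exit 0
    fi
    sleep 10
done
echo "Timed out waiting for OpenStack services" >&2
exit 1

In the log above the equivalent check is visual: every Cinder and Nova service reports State "up" and every OVN agent reports ":-)" / UP before the test play (osism apply --environment openstack test) is run.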
2025-11-01 15:32:22.253540 | orchestrator | 2025-11-01 15:32:22.253647 | orchestrator | PLAY [Create test project] ***************************************************** 2025-11-01 15:32:22.253662 | orchestrator | 2025-11-01 15:32:22.253671 | orchestrator | TASK [Create test domain] ****************************************************** 2025-11-01 15:32:22.253681 | orchestrator | Saturday 01 November 2025 15:25:17 +0000 (0:00:00.073) 0:00:00.073 ***** 2025-11-01 15:32:22.253690 | orchestrator | changed: [localhost] 2025-11-01 15:32:22.253699 | orchestrator | 2025-11-01 15:32:22.253723 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-11-01 15:32:22.253733 | orchestrator | Saturday 01 November 2025 15:25:21 +0000 (0:00:03.699) 0:00:03.772 ***** 2025-11-01 15:32:22.253741 | orchestrator | changed: [localhost] 2025-11-01 15:32:22.253750 | orchestrator | 2025-11-01 15:32:22.253758 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-11-01 15:32:22.253767 | orchestrator | Saturday 01 November 2025 15:25:25 +0000 (0:00:04.260) 0:00:08.033 ***** 2025-11-01 15:32:22.253776 | orchestrator | changed: [localhost] 2025-11-01 15:32:22.253784 | orchestrator | 2025-11-01 15:32:22.253793 | orchestrator | TASK [Create test project] ***************************************************** 2025-11-01 15:32:22.253802 | orchestrator | Saturday 01 November 2025 15:25:32 +0000 (0:00:06.655) 0:00:14.688 ***** 2025-11-01 15:32:22.253811 | orchestrator | changed: [localhost] 2025-11-01 15:32:22.253819 | orchestrator | 2025-11-01 15:32:22.253828 | orchestrator | TASK [Create test user] ******************************************************** 2025-11-01 15:32:22.253836 | orchestrator | Saturday 01 November 2025 15:25:36 +0000 (0:00:04.107) 0:00:18.796 ***** 2025-11-01 15:32:22.253845 | orchestrator | changed: [localhost] 2025-11-01 15:32:22.253853 | orchestrator | 2025-11-01 15:32:22.253862 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-11-01 15:32:22.253870 | orchestrator | Saturday 01 November 2025 15:25:40 +0000 (0:00:04.401) 0:00:23.197 ***** 2025-11-01 15:32:22.253879 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-11-01 15:32:22.253889 | orchestrator | changed: [localhost] => (item=member) 2025-11-01 15:32:22.253899 | orchestrator | changed: [localhost] => (item=creator) 2025-11-01 15:32:22.253908 | orchestrator | 2025-11-01 15:32:22.253917 | orchestrator | TASK [Create test server group] ************************************************ 2025-11-01 15:32:22.253925 | orchestrator | Saturday 01 November 2025 15:25:52 +0000 (0:00:11.778) 0:00:34.976 ***** 2025-11-01 15:32:22.253934 | orchestrator | changed: [localhost] 2025-11-01 15:32:22.253942 | orchestrator | 2025-11-01 15:32:22.253951 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-11-01 15:32:22.253959 | orchestrator | Saturday 01 November 2025 15:25:57 +0000 (0:00:04.672) 0:00:39.649 ***** 2025-11-01 15:32:22.253968 | orchestrator | changed: [localhost] 2025-11-01 15:32:22.253976 | orchestrator | 2025-11-01 15:32:22.253985 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-11-01 15:32:22.253994 | orchestrator | Saturday 01 November 2025 15:26:02 +0000 (0:00:05.301) 0:00:44.950 ***** 2025-11-01 15:32:22.254002 | orchestrator | changed: [localhost] 2025-11-01 15:32:22.254011 | 
orchestrator | 2025-11-01 15:32:22.254066 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-11-01 15:32:22.254076 | orchestrator | Saturday 01 November 2025 15:26:06 +0000 (0:00:04.430) 0:00:49.380 ***** 2025-11-01 15:32:22.254085 | orchestrator | changed: [localhost] 2025-11-01 15:32:22.254095 | orchestrator | 2025-11-01 15:32:22.254105 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-11-01 15:32:22.254114 | orchestrator | Saturday 01 November 2025 15:26:10 +0000 (0:00:04.000) 0:00:53.381 ***** 2025-11-01 15:32:22.254142 | orchestrator | changed: [localhost] 2025-11-01 15:32:22.254152 | orchestrator | 2025-11-01 15:32:22.254162 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-11-01 15:32:22.254171 | orchestrator | Saturday 01 November 2025 15:26:15 +0000 (0:00:04.108) 0:00:57.489 ***** 2025-11-01 15:32:22.254181 | orchestrator | changed: [localhost] 2025-11-01 15:32:22.254191 | orchestrator | 2025-11-01 15:32:22.254200 | orchestrator | TASK [Create test network topology] ******************************************** 2025-11-01 15:32:22.254210 | orchestrator | Saturday 01 November 2025 15:26:19 +0000 (0:00:04.586) 0:01:02.076 ***** 2025-11-01 15:32:22.254220 | orchestrator | changed: [localhost] 2025-11-01 15:32:22.254229 | orchestrator | 2025-11-01 15:32:22.254240 | orchestrator | TASK [Create test instances] *************************************************** 2025-11-01 15:32:22.254250 | orchestrator | Saturday 01 November 2025 15:26:36 +0000 (0:00:17.086) 0:01:19.163 ***** 2025-11-01 15:32:22.254295 | orchestrator | changed: [localhost] => (item=test) 2025-11-01 15:32:22.254307 | orchestrator | changed: [localhost] => (item=test-1) 2025-11-01 15:32:22.254316 | orchestrator | 2025-11-01 15:32:22.254326 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-01 15:32:22.254336 | orchestrator | 2025-11-01 15:32:22.254345 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-01 15:32:22.254355 | orchestrator | changed: [localhost] => (item=test-2) 2025-11-01 15:32:22.254364 | orchestrator | 2025-11-01 15:32:22.254373 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-01 15:32:22.254383 | orchestrator | changed: [localhost] => (item=test-3) 2025-11-01 15:32:22.254393 | orchestrator | 2025-11-01 15:32:22.254402 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-01 15:32:22.254412 | orchestrator | 2025-11-01 15:32:22.254420 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-01 15:32:22.254429 | orchestrator | changed: [localhost] => (item=test-4) 2025-11-01 15:32:22.254437 | orchestrator | 2025-11-01 15:32:22.254446 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-11-01 15:32:22.254469 | orchestrator | Saturday 01 November 2025 15:30:55 +0000 (0:04:18.437) 0:05:37.601 ***** 2025-11-01 15:32:22.254478 | orchestrator | changed: [localhost] => (item=test) 2025-11-01 15:32:22.254491 | orchestrator | changed: [localhost] => (item=test-1) 2025-11-01 15:32:22.254499 | orchestrator | changed: [localhost] => (item=test-2) 2025-11-01 15:32:22.254508 | orchestrator | changed: [localhost] => (item=test-3) 2025-11-01 15:32:22.254516 | 
orchestrator | changed: [localhost] => (item=test-4) 2025-11-01 15:32:22.254525 | orchestrator | 2025-11-01 15:32:22.254533 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-11-01 15:32:22.254557 | orchestrator | Saturday 01 November 2025 15:31:20 +0000 (0:00:24.865) 0:06:02.466 ***** 2025-11-01 15:32:22.254566 | orchestrator | changed: [localhost] => (item=test) 2025-11-01 15:32:22.254575 | orchestrator | changed: [localhost] => (item=test-1) 2025-11-01 15:32:22.254583 | orchestrator | changed: [localhost] => (item=test-2) 2025-11-01 15:32:22.254591 | orchestrator | changed: [localhost] => (item=test-3) 2025-11-01 15:32:22.254600 | orchestrator | changed: [localhost] => (item=test-4) 2025-11-01 15:32:22.254608 | orchestrator | 2025-11-01 15:32:22.254617 | orchestrator | TASK [Create test volume] ****************************************************** 2025-11-01 15:32:22.254625 | orchestrator | Saturday 01 November 2025 15:31:55 +0000 (0:00:35.036) 0:06:37.502 ***** 2025-11-01 15:32:22.254634 | orchestrator | changed: [localhost] 2025-11-01 15:32:22.254642 | orchestrator | 2025-11-01 15:32:22.254651 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-11-01 15:32:22.254659 | orchestrator | Saturday 01 November 2025 15:32:01 +0000 (0:00:06.727) 0:06:44.230 ***** 2025-11-01 15:32:22.254668 | orchestrator | changed: [localhost] 2025-11-01 15:32:22.254676 | orchestrator | 2025-11-01 15:32:22.254693 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-11-01 15:32:22.254702 | orchestrator | Saturday 01 November 2025 15:32:16 +0000 (0:00:14.532) 0:06:58.763 ***** 2025-11-01 15:32:22.254718 | orchestrator | ok: [localhost] 2025-11-01 15:32:22.254727 | orchestrator | 2025-11-01 15:32:22.254735 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-11-01 15:32:22.254744 | orchestrator | Saturday 01 November 2025 15:32:21 +0000 (0:00:05.576) 0:07:04.340 ***** 2025-11-01 15:32:22.254752 | orchestrator | ok: [localhost] => { 2025-11-01 15:32:22.254761 | orchestrator |  "msg": "192.168.112.200" 2025-11-01 15:32:22.254770 | orchestrator | } 2025-11-01 15:32:22.254778 | orchestrator | 2025-11-01 15:32:22.254787 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 15:32:22.254796 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 15:32:22.254805 | orchestrator | 2025-11-01 15:32:22.254814 | orchestrator | 2025-11-01 15:32:22.254823 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 15:32:22.254832 | orchestrator | Saturday 01 November 2025 15:32:21 +0000 (0:00:00.050) 0:07:04.390 ***** 2025-11-01 15:32:22.254840 | orchestrator | =============================================================================== 2025-11-01 15:32:22.254849 | orchestrator | Create test instances ------------------------------------------------- 258.44s 2025-11-01 15:32:22.254857 | orchestrator | Add tag to instances --------------------------------------------------- 35.04s 2025-11-01 15:32:22.254865 | orchestrator | Add metadata to instances ---------------------------------------------- 24.87s 2025-11-01 15:32:22.254874 | orchestrator | Create test network topology ------------------------------------------- 17.09s 2025-11-01 15:32:22.254882 | orchestrator | 
Attach test volume ----------------------------------------------------- 14.53s 2025-11-01 15:32:22.254891 | orchestrator | Add member roles to user test ------------------------------------------ 11.78s 2025-11-01 15:32:22.254899 | orchestrator | Create test volume ------------------------------------------------------ 6.73s 2025-11-01 15:32:22.254907 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.66s 2025-11-01 15:32:22.254916 | orchestrator | Create floating ip address ---------------------------------------------- 5.58s 2025-11-01 15:32:22.254924 | orchestrator | Create ssh security group ----------------------------------------------- 5.30s 2025-11-01 15:32:22.254933 | orchestrator | Create test server group ------------------------------------------------ 4.67s 2025-11-01 15:32:22.254941 | orchestrator | Create test keypair ----------------------------------------------------- 4.59s 2025-11-01 15:32:22.254950 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.43s 2025-11-01 15:32:22.254958 | orchestrator | Create test user -------------------------------------------------------- 4.40s 2025-11-01 15:32:22.254966 | orchestrator | Create test-admin user -------------------------------------------------- 4.26s 2025-11-01 15:32:22.254975 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.11s 2025-11-01 15:32:22.254983 | orchestrator | Create test project ----------------------------------------------------- 4.11s 2025-11-01 15:32:22.254991 | orchestrator | Create icmp security group ---------------------------------------------- 4.00s 2025-11-01 15:32:22.255000 | orchestrator | Create test domain ------------------------------------------------------ 3.70s 2025-11-01 15:32:22.255008 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-11-01 15:32:22.605750 | orchestrator | + server_list 2025-11-01 15:32:22.605791 | orchestrator | + openstack --os-cloud test server list 2025-11-01 15:32:26.523540 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-11-01 15:32:26.523638 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-11-01 15:32:26.523652 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-11-01 15:32:26.523687 | orchestrator | | 9e966b38-70d0-481e-b898-1aa4832c5d08 | test-4 | ACTIVE | auto_allocated_network=10.42.0.12, 192.168.112.130 | N/A (booted from volume) | SCS-1L-1 | 2025-11-01 15:32:26.523700 | orchestrator | | 030d3113-1726-4b4e-a28e-d4fa32489cb1 | test-3 | ACTIVE | auto_allocated_network=10.42.0.35, 192.168.112.123 | N/A (booted from volume) | SCS-1L-1 | 2025-11-01 15:32:26.523711 | orchestrator | | cddbbbb3-b265-406d-ac82-421ab2caa036 | test-2 | ACTIVE | auto_allocated_network=10.42.0.37, 192.168.112.142 | N/A (booted from volume) | SCS-1L-1 | 2025-11-01 15:32:26.523722 | orchestrator | | 15652481-a585-45b8-aeac-1205b9ad1037 | test-1 | ACTIVE | auto_allocated_network=10.42.0.22, 192.168.112.115 | N/A (booted from volume) | SCS-1L-1 | 2025-11-01 15:32:26.523748 | orchestrator | | cab98295-ec13-4c97-8844-f7aa63f4f462 | test | ACTIVE | auto_allocated_network=10.42.0.46, 192.168.112.200 | N/A (booted from volume) | SCS-1L-1 | 2025-11-01 
15:32:26.523760 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-11-01 15:32:26.803132 | orchestrator | + openstack --os-cloud test server show test 2025-11-01 15:32:30.698546 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:30.698659 | orchestrator | | Field | Value | 2025-11-01 15:32:30.698675 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:30.698687 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-11-01 15:32:30.698698 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-11-01 15:32:30.698708 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-11-01 15:32:30.698718 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-11-01 15:32:30.698744 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-11-01 15:32:30.698755 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-11-01 15:32:30.698786 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-11-01 15:32:30.698798 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-11-01 15:32:30.698808 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-11-01 15:32:30.698818 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-11-01 15:32:30.698828 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-11-01 15:32:30.698837 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-11-01 15:32:30.698847 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-11-01 15:32:30.698864 | orchestrator | | OS-EXT-STS:task_state | None | 2025-11-01 15:32:30.698874 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-11-01 15:32:30.698884 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-01T15:27:20.000000 | 2025-11-01 15:32:30.698905 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-11-01 15:32:30.698916 | orchestrator | | accessIPv4 | | 2025-11-01 15:32:30.698925 | orchestrator | | accessIPv6 | | 2025-11-01 15:32:30.698935 | orchestrator | | addresses | auto_allocated_network=10.42.0.46, 192.168.112.200 | 2025-11-01 15:32:30.698945 | orchestrator | | config_drive | | 2025-11-01 15:32:30.698955 | orchestrator | | created | 2025-11-01T15:26:45Z | 2025-11-01 15:32:30.698965 | orchestrator | | description | None | 2025-11-01 15:32:30.698984 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-11-01 15:32:30.698994 
| orchestrator | | hostId | 13d9de4ee445138039af865b14514fa51b7ecb1fb7c07e24b7295e93 | 2025-11-01 15:32:30.699004 | orchestrator | | host_status | None | 2025-11-01 15:32:30.699021 | orchestrator | | id | cab98295-ec13-4c97-8844-f7aa63f4f462 | 2025-11-01 15:32:30.699033 | orchestrator | | image | N/A (booted from volume) | 2025-11-01 15:32:30.699045 | orchestrator | | key_name | test | 2025-11-01 15:32:30.699056 | orchestrator | | locked | False | 2025-11-01 15:32:30.699067 | orchestrator | | locked_reason | None | 2025-11-01 15:32:30.699078 | orchestrator | | name | test | 2025-11-01 15:32:30.699095 | orchestrator | | pinned_availability_zone | None | 2025-11-01 15:32:30.699106 | orchestrator | | progress | 0 | 2025-11-01 15:32:30.699118 | orchestrator | | project_id | c210814c55b24ce58a2014c2fb1ebb7a | 2025-11-01 15:32:30.699138 | orchestrator | | properties | hostname='test' | 2025-11-01 15:32:30.699156 | orchestrator | | security_groups | name='ssh' | 2025-11-01 15:32:30.699168 | orchestrator | | | name='icmp' | 2025-11-01 15:32:30.699179 | orchestrator | | server_groups | None | 2025-11-01 15:32:30.699190 | orchestrator | | status | ACTIVE | 2025-11-01 15:32:30.699202 | orchestrator | | tags | test | 2025-11-01 15:32:30.699218 | orchestrator | | trusted_image_certificates | None | 2025-11-01 15:32:30.699230 | orchestrator | | updated | 2025-11-01T15:31:00Z | 2025-11-01 15:32:30.699242 | orchestrator | | user_id | d6c73582f3a94f6ca23b882b5791905b | 2025-11-01 15:32:30.699254 | orchestrator | | volumes_attached | delete_on_termination='True', id='5a891fb6-0219-4f12-8be4-cb546981e860' | 2025-11-01 15:32:30.699269 | orchestrator | | | delete_on_termination='False', id='98776f8f-0256-42d1-a814-6b29ceb0f70b' | 2025-11-01 15:32:30.699783 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:30.969358 | orchestrator | + openstack --os-cloud test server show test-1 2025-11-01 15:32:34.198248 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:34.198353 | orchestrator | | Field | Value | 2025-11-01 15:32:34.198369 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:34.198412 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-11-01 15:32:34.198424 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-11-01 15:32:34.198436 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-11-01 15:32:34.198447 | orchestrator | | 
OS-EXT-SRV-ATTR:hostname | test-1 | 2025-11-01 15:32:34.198481 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-11-01 15:32:34.198507 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-11-01 15:32:34.198538 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-11-01 15:32:34.198551 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-11-01 15:32:34.198562 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-11-01 15:32:34.198582 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-11-01 15:32:34.198594 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-11-01 15:32:34.198605 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-11-01 15:32:34.198616 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-11-01 15:32:34.198627 | orchestrator | | OS-EXT-STS:task_state | None | 2025-11-01 15:32:34.198638 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-11-01 15:32:34.198653 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-01T15:28:17.000000 | 2025-11-01 15:32:34.198672 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-11-01 15:32:34.198684 | orchestrator | | accessIPv4 | | 2025-11-01 15:32:34.198695 | orchestrator | | accessIPv6 | | 2025-11-01 15:32:34.198713 | orchestrator | | addresses | auto_allocated_network=10.42.0.22, 192.168.112.115 | 2025-11-01 15:32:34.198724 | orchestrator | | config_drive | | 2025-11-01 15:32:34.198735 | orchestrator | | created | 2025-11-01T15:27:39Z | 2025-11-01 15:32:34.198746 | orchestrator | | description | None | 2025-11-01 15:32:34.198757 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-11-01 15:32:34.198770 | orchestrator | | hostId | 0587ced5186780e123bd3e14cbea74f348a6469d2bcacbb4939c0ce8 | 2025-11-01 15:32:34.198786 | orchestrator | | host_status | None | 2025-11-01 15:32:34.198804 | orchestrator | | id | 15652481-a585-45b8-aeac-1205b9ad1037 | 2025-11-01 15:32:34.198816 | orchestrator | | image | N/A (booted from volume) | 2025-11-01 15:32:34.198834 | orchestrator | | key_name | test | 2025-11-01 15:32:34.198845 | orchestrator | | locked | False | 2025-11-01 15:32:34.198856 | orchestrator | | locked_reason | None | 2025-11-01 15:32:34.198867 | orchestrator | | name | test-1 | 2025-11-01 15:32:34.198878 | orchestrator | | pinned_availability_zone | None | 2025-11-01 15:32:34.198889 | orchestrator | | progress | 0 | 2025-11-01 15:32:34.198900 | orchestrator | | project_id | c210814c55b24ce58a2014c2fb1ebb7a | 2025-11-01 15:32:34.198911 | orchestrator | | properties | hostname='test-1' | 2025-11-01 15:32:34.198929 | orchestrator | | security_groups | name='ssh' | 2025-11-01 15:32:34.198952 | orchestrator | | | name='icmp' | 2025-11-01 15:32:34.198964 | orchestrator | | server_groups | None | 2025-11-01 15:32:34.198982 | orchestrator | | status | ACTIVE | 2025-11-01 15:32:34.198993 | orchestrator | | tags | test | 2025-11-01 15:32:34.199005 | orchestrator | | trusted_image_certificates | None | 2025-11-01 15:32:34.199016 | orchestrator | | updated | 2025-11-01T15:31:05Z | 2025-11-01 15:32:34.199027 | orchestrator | | user_id | d6c73582f3a94f6ca23b882b5791905b | 2025-11-01 15:32:34.199038 | orchestrator | | 
volumes_attached | delete_on_termination='True', id='2334cd0c-73e9-4242-92f6-00da5b68bf90' | 2025-11-01 15:32:34.201341 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:34.481558 | orchestrator | + openstack --os-cloud test server show test-2 2025-11-01 15:32:37.616420 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:37.616559 | orchestrator | | Field | Value | 2025-11-01 15:32:37.616576 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:37.616587 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-11-01 15:32:37.616599 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-11-01 15:32:37.616610 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-11-01 15:32:37.616621 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-11-01 15:32:37.616632 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-11-01 15:32:37.616643 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-11-01 15:32:37.616715 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-11-01 15:32:37.616729 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-11-01 15:32:37.616740 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-11-01 15:32:37.616751 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-11-01 15:32:37.616762 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-11-01 15:32:37.616773 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-11-01 15:32:37.616784 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-11-01 15:32:37.616795 | orchestrator | | OS-EXT-STS:task_state | None | 2025-11-01 15:32:37.616805 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-11-01 15:32:37.616821 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-01T15:29:10.000000 | 2025-11-01 15:32:37.616855 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-11-01 15:32:37.616868 | orchestrator | | accessIPv4 | | 2025-11-01 15:32:37.616879 | orchestrator | | accessIPv6 | | 2025-11-01 15:32:37.616889 | orchestrator | | addresses | auto_allocated_network=10.42.0.37, 192.168.112.142 | 2025-11-01 15:32:37.616901 | orchestrator | | config_drive | | 2025-11-01 15:32:37.616912 | orchestrator | | created | 2025-11-01T15:28:36Z | 2025-11-01 15:32:37.616923 | orchestrator | | description | None | 2025-11-01 15:32:37.616933 | orchestrator | | flavor | description=, disk='0', 
ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-11-01 15:32:37.616944 | orchestrator | | hostId | 0ecaefb190f607cd7fb7be270d15af692c04d99c058d4a286153fea0 | 2025-11-01 15:32:37.616966 | orchestrator | | host_status | None | 2025-11-01 15:32:37.616987 | orchestrator | | id | cddbbbb3-b265-406d-ac82-421ab2caa036 | 2025-11-01 15:32:37.616999 | orchestrator | | image | N/A (booted from volume) | 2025-11-01 15:32:37.617010 | orchestrator | | key_name | test | 2025-11-01 15:32:37.617021 | orchestrator | | locked | False | 2025-11-01 15:32:37.617032 | orchestrator | | locked_reason | None | 2025-11-01 15:32:37.617043 | orchestrator | | name | test-2 | 2025-11-01 15:32:37.617054 | orchestrator | | pinned_availability_zone | None | 2025-11-01 15:32:37.617065 | orchestrator | | progress | 0 | 2025-11-01 15:32:37.617086 | orchestrator | | project_id | c210814c55b24ce58a2014c2fb1ebb7a | 2025-11-01 15:32:37.617102 | orchestrator | | properties | hostname='test-2' | 2025-11-01 15:32:37.617122 | orchestrator | | security_groups | name='ssh' | 2025-11-01 15:32:37.617134 | orchestrator | | | name='icmp' | 2025-11-01 15:32:37.617145 | orchestrator | | server_groups | None | 2025-11-01 15:32:37.617156 | orchestrator | | status | ACTIVE | 2025-11-01 15:32:37.617167 | orchestrator | | tags | test | 2025-11-01 15:32:37.617177 | orchestrator | | trusted_image_certificates | None | 2025-11-01 15:32:37.617188 | orchestrator | | updated | 2025-11-01T15:31:10Z | 2025-11-01 15:32:37.617205 | orchestrator | | user_id | d6c73582f3a94f6ca23b882b5791905b | 2025-11-01 15:32:37.617217 | orchestrator | | volumes_attached | delete_on_termination='True', id='9cfb318c-99f7-47f2-a1a5-224dcfadaf29' | 2025-11-01 15:32:37.620687 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:37.912669 | orchestrator | + openstack --os-cloud test server show test-3 2025-11-01 15:32:41.431041 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:41.431140 | orchestrator | | Field | Value | 2025-11-01 15:32:41.431156 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:41.431169 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-11-01 15:32:41.431182 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-11-01 15:32:41.431212 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-11-01 15:32:41.431225 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-11-01 15:32:41.431256 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-11-01 15:32:41.431270 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-11-01 15:32:41.431305 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-11-01 15:32:41.431319 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-11-01 15:32:41.431331 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-11-01 15:32:41.431343 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-11-01 15:32:41.431354 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-11-01 15:32:41.431365 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-11-01 15:32:41.431376 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-11-01 15:32:41.431395 | orchestrator | | OS-EXT-STS:task_state | None | 2025-11-01 15:32:41.431406 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-11-01 15:32:41.431417 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-01T15:29:57.000000 | 2025-11-01 15:32:41.431440 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-11-01 15:32:41.431453 | orchestrator | | accessIPv4 | | 2025-11-01 15:32:41.431495 | orchestrator | | accessIPv6 | | 2025-11-01 15:32:41.431507 | orchestrator | | addresses | auto_allocated_network=10.42.0.35, 192.168.112.123 | 2025-11-01 15:32:41.431518 | orchestrator | | config_drive | | 2025-11-01 15:32:41.431529 | orchestrator | | created | 2025-11-01T15:29:32Z | 2025-11-01 15:32:41.431548 | orchestrator | | description | None | 2025-11-01 15:32:41.431560 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-11-01 15:32:41.431571 | orchestrator | | hostId | 0587ced5186780e123bd3e14cbea74f348a6469d2bcacbb4939c0ce8 | 2025-11-01 15:32:41.431584 | orchestrator | | host_status | None | 2025-11-01 15:32:41.431609 | orchestrator | | id | 030d3113-1726-4b4e-a28e-d4fa32489cb1 | 2025-11-01 15:32:41.431622 | orchestrator | | image | N/A (booted from volume) | 2025-11-01 15:32:41.431635 | orchestrator | | key_name | test | 2025-11-01 15:32:41.431647 | orchestrator | | locked | False | 2025-11-01 15:32:41.431660 | orchestrator | | locked_reason | None | 2025-11-01 15:32:41.431679 | orchestrator | | name | test-3 | 2025-11-01 15:32:41.431692 | orchestrator | | pinned_availability_zone | None | 2025-11-01 15:32:41.431704 | orchestrator | | progress | 0 | 2025-11-01 15:32:41.431717 | orchestrator | | project_id | c210814c55b24ce58a2014c2fb1ebb7a | 2025-11-01 15:32:41.431729 | orchestrator | | properties | hostname='test-3' | 2025-11-01 15:32:41.431754 | orchestrator | | security_groups | name='ssh' | 2025-11-01 15:32:41.431767 | orchestrator | | | name='icmp' | 2025-11-01 15:32:41.431780 | orchestrator | | server_groups | None | 2025-11-01 15:32:41.431792 | orchestrator | | status | ACTIVE | 2025-11-01 15:32:41.431804 | orchestrator | | tags | test | 2025-11-01 15:32:41.431823 | orchestrator | | 
trusted_image_certificates | None | 2025-11-01 15:32:41.431835 | orchestrator | | updated | 2025-11-01T15:31:14Z | 2025-11-01 15:32:41.431848 | orchestrator | | user_id | d6c73582f3a94f6ca23b882b5791905b | 2025-11-01 15:32:41.431860 | orchestrator | | volumes_attached | delete_on_termination='True', id='e693a573-a66b-475b-b4ca-5097a2bca9ed' | 2025-11-01 15:32:41.436022 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:41.732313 | orchestrator | + openstack --os-cloud test server show test-4 2025-11-01 15:32:44.947592 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:44.947702 | orchestrator | | Field | Value | 2025-11-01 15:32:44.947717 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:44.947729 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-11-01 15:32:44.947763 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-11-01 15:32:44.947775 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-11-01 15:32:44.947787 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-11-01 15:32:44.947799 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-11-01 15:32:44.947811 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-11-01 15:32:44.947858 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-11-01 15:32:44.947872 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-11-01 15:32:44.947884 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-11-01 15:32:44.947896 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-11-01 15:32:44.947915 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-11-01 15:32:44.947927 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-11-01 15:32:44.947939 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-11-01 15:32:44.947951 | orchestrator | | OS-EXT-STS:task_state | None | 2025-11-01 15:32:44.947962 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-11-01 15:32:44.947974 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-01T15:30:42.000000 | 2025-11-01 15:32:44.947993 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-11-01 15:32:44.948006 | orchestrator | | accessIPv4 | | 2025-11-01 15:32:44.948017 | orchestrator | | accessIPv6 | | 2025-11-01 15:32:44.948036 | orchestrator | | addresses | auto_allocated_network=10.42.0.12, 192.168.112.130 | 2025-11-01 15:32:44.948048 | 
orchestrator | | config_drive | | 2025-11-01 15:32:44.948488 | orchestrator | | created | 2025-11-01T15:30:16Z | 2025-11-01 15:32:44.948506 | orchestrator | | description | None | 2025-11-01 15:32:44.948522 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-11-01 15:32:44.948533 | orchestrator | | hostId | 13d9de4ee445138039af865b14514fa51b7ecb1fb7c07e24b7295e93 | 2025-11-01 15:32:44.948544 | orchestrator | | host_status | None | 2025-11-01 15:32:44.948564 | orchestrator | | id | 9e966b38-70d0-481e-b898-1aa4832c5d08 | 2025-11-01 15:32:44.948576 | orchestrator | | image | N/A (booted from volume) | 2025-11-01 15:32:44.948595 | orchestrator | | key_name | test | 2025-11-01 15:32:44.948607 | orchestrator | | locked | False | 2025-11-01 15:32:44.948618 | orchestrator | | locked_reason | None | 2025-11-01 15:32:44.948629 | orchestrator | | name | test-4 | 2025-11-01 15:32:44.948640 | orchestrator | | pinned_availability_zone | None | 2025-11-01 15:32:44.948655 | orchestrator | | progress | 0 | 2025-11-01 15:32:44.948667 | orchestrator | | project_id | c210814c55b24ce58a2014c2fb1ebb7a | 2025-11-01 15:32:44.948678 | orchestrator | | properties | hostname='test-4' | 2025-11-01 15:32:44.948696 | orchestrator | | security_groups | name='ssh' | 2025-11-01 15:32:44.948708 | orchestrator | | | name='icmp' | 2025-11-01 15:32:44.948726 | orchestrator | | server_groups | None | 2025-11-01 15:32:44.948737 | orchestrator | | status | ACTIVE | 2025-11-01 15:32:44.948748 | orchestrator | | tags | test | 2025-11-01 15:32:44.948759 | orchestrator | | trusted_image_certificates | None | 2025-11-01 15:32:44.948771 | orchestrator | | updated | 2025-11-01T15:31:19Z | 2025-11-01 15:32:44.948787 | orchestrator | | user_id | d6c73582f3a94f6ca23b882b5791905b | 2025-11-01 15:32:44.948798 | orchestrator | | volumes_attached | delete_on_termination='True', id='ec66efe7-b8d5-4217-a2c4-450b0bedbbb0' | 2025-11-01 15:32:44.953138 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 15:32:45.242196 | orchestrator | + server_ping 2025-11-01 15:32:45.243952 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-11-01 15:32:45.244290 | orchestrator | ++ tr -d '\r' 2025-11-01 15:32:48.273372 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:32:48.273514 | orchestrator | + ping -c3 192.168.112.123 2025-11-01 15:32:48.287089 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data. 
2025-11-01 15:32:48.287120 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=6.25 ms 2025-11-01 15:32:49.284979 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=2.26 ms 2025-11-01 15:32:50.285858 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=1.40 ms 2025-11-01 15:32:50.286458 | orchestrator | 2025-11-01 15:32:50.286509 | orchestrator | --- 192.168.112.123 ping statistics --- 2025-11-01 15:32:50.286520 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:32:50.286529 | orchestrator | rtt min/avg/max/mdev = 1.400/3.301/6.249/2.113 ms 2025-11-01 15:32:50.286539 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:32:50.286558 | orchestrator | + ping -c3 192.168.112.142 2025-11-01 15:32:50.301062 | orchestrator | PING 192.168.112.142 (192.168.112.142) 56(84) bytes of data. 2025-11-01 15:32:50.301088 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=1 ttl=63 time=9.65 ms 2025-11-01 15:32:51.296058 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=2 ttl=63 time=2.29 ms 2025-11-01 15:32:52.297528 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=3 ttl=63 time=1.89 ms 2025-11-01 15:32:52.297576 | orchestrator | 2025-11-01 15:32:52.297587 | orchestrator | --- 192.168.112.142 ping statistics --- 2025-11-01 15:32:52.297598 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-11-01 15:32:52.297607 | orchestrator | rtt min/avg/max/mdev = 1.889/4.608/9.648/3.567 ms 2025-11-01 15:32:52.298256 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:32:52.298275 | orchestrator | + ping -c3 192.168.112.200 2025-11-01 15:32:52.311252 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data. 2025-11-01 15:32:52.311273 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=9.56 ms 2025-11-01 15:32:53.306269 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.15 ms 2025-11-01 15:32:54.307568 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=1.97 ms 2025-11-01 15:32:54.307661 | orchestrator | 2025-11-01 15:32:54.307676 | orchestrator | --- 192.168.112.200 ping statistics --- 2025-11-01 15:32:54.307688 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:32:54.307699 | orchestrator | rtt min/avg/max/mdev = 1.965/4.558/9.559/3.536 ms 2025-11-01 15:32:54.308447 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:32:54.308494 | orchestrator | + ping -c3 192.168.112.115 2025-11-01 15:32:54.318696 | orchestrator | PING 192.168.112.115 (192.168.112.115) 56(84) bytes of data. 
2025-11-01 15:32:54.318720 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=1 ttl=63 time=6.78 ms 2025-11-01 15:32:55.315041 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=2 ttl=63 time=1.79 ms 2025-11-01 15:32:56.317441 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=3 ttl=63 time=1.69 ms 2025-11-01 15:32:56.317573 | orchestrator | 2025-11-01 15:32:56.317699 | orchestrator | --- 192.168.112.115 ping statistics --- 2025-11-01 15:32:56.317713 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:32:56.317722 | orchestrator | rtt min/avg/max/mdev = 1.687/3.419/6.779/2.376 ms 2025-11-01 15:32:56.317744 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:32:56.317754 | orchestrator | + ping -c3 192.168.112.130 2025-11-01 15:32:56.331689 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data. 2025-11-01 15:32:56.331711 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=10.0 ms 2025-11-01 15:32:57.325919 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.78 ms 2025-11-01 15:32:58.328763 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=2.59 ms 2025-11-01 15:32:58.328855 | orchestrator | 2025-11-01 15:32:58.328869 | orchestrator | --- 192.168.112.130 ping statistics --- 2025-11-01 15:32:58.328908 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:32:58.328920 | orchestrator | rtt min/avg/max/mdev = 2.593/5.127/10.008/3.451 ms 2025-11-01 15:32:58.329308 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-01 15:32:58.329331 | orchestrator | + compute_list 2025-11-01 15:32:58.329343 | orchestrator | + osism manage compute list testbed-node-3 2025-11-01 15:33:02.204878 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:33:02.205006 | orchestrator | | ID | Name | Status | 2025-11-01 15:33:02.205023 | orchestrator | |--------------------------------------+--------+----------| 2025-11-01 15:33:02.205034 | orchestrator | | 030d3113-1726-4b4e-a28e-d4fa32489cb1 | test-3 | ACTIVE | 2025-11-01 15:33:02.205045 | orchestrator | | 15652481-a585-45b8-aeac-1205b9ad1037 | test-1 | ACTIVE | 2025-11-01 15:33:02.205057 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:33:02.575051 | orchestrator | + osism manage compute list testbed-node-4 2025-11-01 15:33:06.025010 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:33:06.025115 | orchestrator | | ID | Name | Status | 2025-11-01 15:33:06.025131 | orchestrator | |--------------------------------------+--------+----------| 2025-11-01 15:33:06.025143 | orchestrator | | cddbbbb3-b265-406d-ac82-421ab2caa036 | test-2 | ACTIVE | 2025-11-01 15:33:06.025154 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:33:06.424295 | orchestrator | + osism manage compute list testbed-node-5 2025-11-01 15:33:10.062378 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:33:10.062526 | orchestrator | | ID | Name | Status | 2025-11-01 15:33:10.062540 | orchestrator | |--------------------------------------+--------+----------| 2025-11-01 15:33:10.062551 | orchestrator | | 9e966b38-70d0-481e-b898-1aa4832c5d08 | test-4 | ACTIVE | 2025-11-01 15:33:10.062561 | orchestrator | | 
cab98295-ec13-4c97-8844-f7aa63f4f462 | test | ACTIVE | 2025-11-01 15:33:10.062571 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:33:10.403389 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-11-01 15:33:13.733225 | orchestrator | 2025-11-01 15:33:13 | INFO  | Live migrating server cddbbbb3-b265-406d-ac82-421ab2caa036 2025-11-01 15:33:26.740878 | orchestrator | 2025-11-01 15:33:26 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:33:29.141997 | orchestrator | 2025-11-01 15:33:29 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:33:31.589914 | orchestrator | 2025-11-01 15:33:31 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:33:33.912191 | orchestrator | 2025-11-01 15:33:33 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:33:36.221315 | orchestrator | 2025-11-01 15:33:36 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:33:38.624868 | orchestrator | 2025-11-01 15:33:38 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:33:40.922162 | orchestrator | 2025-11-01 15:33:40 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:33:43.242785 | orchestrator | 2025-11-01 15:33:43 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:33:45.614306 | orchestrator | 2025-11-01 15:33:45 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:33:48.132993 | orchestrator | 2025-11-01 15:33:48 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) completed with status ACTIVE 2025-11-01 15:33:48.447981 | orchestrator | + compute_list 2025-11-01 15:33:48.448047 | orchestrator | + osism manage compute list testbed-node-3 2025-11-01 15:33:51.825298 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:33:51.825397 | orchestrator | | ID | Name | Status | 2025-11-01 15:33:51.825412 | orchestrator | |--------------------------------------+--------+----------| 2025-11-01 15:33:51.825425 | orchestrator | | 030d3113-1726-4b4e-a28e-d4fa32489cb1 | test-3 | ACTIVE | 2025-11-01 15:33:51.825436 | orchestrator | | cddbbbb3-b265-406d-ac82-421ab2caa036 | test-2 | ACTIVE | 2025-11-01 15:33:51.825447 | orchestrator | | 15652481-a585-45b8-aeac-1205b9ad1037 | test-1 | ACTIVE | 2025-11-01 15:33:51.825458 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:33:52.165721 | orchestrator | + osism manage compute list testbed-node-4 2025-11-01 15:33:55.209826 | orchestrator | +------+--------+----------+ 2025-11-01 15:33:55.209927 | orchestrator | | ID | Name | Status | 2025-11-01 15:33:55.209941 | orchestrator | |------+--------+----------| 2025-11-01 15:33:55.209952 | orchestrator | +------+--------+----------+ 2025-11-01 15:33:55.589136 | orchestrator | + osism manage compute list testbed-node-5 2025-11-01 15:33:58.886943 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:33:58.887047 | orchestrator | | ID | Name | Status | 2025-11-01 15:33:58.887062 | orchestrator | 
|--------------------------------------+--------+----------| 2025-11-01 15:33:58.887074 | orchestrator | | 9e966b38-70d0-481e-b898-1aa4832c5d08 | test-4 | ACTIVE | 2025-11-01 15:33:58.887085 | orchestrator | | cab98295-ec13-4c97-8844-f7aa63f4f462 | test | ACTIVE | 2025-11-01 15:33:58.887095 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:33:59.390287 | orchestrator | + server_ping 2025-11-01 15:33:59.391200 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-11-01 15:33:59.391231 | orchestrator | ++ tr -d '\r' 2025-11-01 15:34:02.399374 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:34:02.399472 | orchestrator | + ping -c3 192.168.112.123 2025-11-01 15:34:02.408518 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data. 2025-11-01 15:34:02.408567 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=7.61 ms 2025-11-01 15:34:03.405188 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=1.94 ms 2025-11-01 15:34:04.406968 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=1.89 ms 2025-11-01 15:34:04.407821 | orchestrator | 2025-11-01 15:34:04.407857 | orchestrator | --- 192.168.112.123 ping statistics --- 2025-11-01 15:34:04.407870 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:34:04.407881 | orchestrator | rtt min/avg/max/mdev = 1.888/3.811/7.606/2.683 ms 2025-11-01 15:34:04.407907 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:34:04.407919 | orchestrator | + ping -c3 192.168.112.142 2025-11-01 15:34:04.421847 | orchestrator | PING 192.168.112.142 (192.168.112.142) 56(84) bytes of data. 2025-11-01 15:34:04.421899 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=1 ttl=63 time=8.77 ms 2025-11-01 15:34:05.415670 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=2 ttl=63 time=1.82 ms 2025-11-01 15:34:06.417622 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=3 ttl=63 time=2.21 ms 2025-11-01 15:34:06.417716 | orchestrator | 2025-11-01 15:34:06.417731 | orchestrator | --- 192.168.112.142 ping statistics --- 2025-11-01 15:34:06.417744 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-11-01 15:34:06.417755 | orchestrator | rtt min/avg/max/mdev = 1.818/4.267/8.772/3.189 ms 2025-11-01 15:34:06.417766 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:34:06.417778 | orchestrator | + ping -c3 192.168.112.200 2025-11-01 15:34:06.429196 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data. 
2025-11-01 15:34:06.429221 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=7.11 ms 2025-11-01 15:34:07.426164 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.20 ms 2025-11-01 15:34:08.427946 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=1.91 ms 2025-11-01 15:34:08.428042 | orchestrator | 2025-11-01 15:34:08.428084 | orchestrator | --- 192.168.112.200 ping statistics --- 2025-11-01 15:34:08.428097 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:34:08.428108 | orchestrator | rtt min/avg/max/mdev = 1.906/3.738/7.112/2.388 ms 2025-11-01 15:34:08.428825 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:34:08.428850 | orchestrator | + ping -c3 192.168.112.115 2025-11-01 15:34:08.438372 | orchestrator | PING 192.168.112.115 (192.168.112.115) 56(84) bytes of data. 2025-11-01 15:34:08.438398 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=1 ttl=63 time=6.01 ms 2025-11-01 15:34:09.436358 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=2 ttl=63 time=2.12 ms 2025-11-01 15:34:10.436142 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=3 ttl=63 time=1.25 ms 2025-11-01 15:34:10.436210 | orchestrator | 2025-11-01 15:34:10.436217 | orchestrator | --- 192.168.112.115 ping statistics --- 2025-11-01 15:34:10.436224 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-11-01 15:34:10.436229 | orchestrator | rtt min/avg/max/mdev = 1.254/3.126/6.009/2.068 ms 2025-11-01 15:34:10.436838 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:34:10.436850 | orchestrator | + ping -c3 192.168.112.130 2025-11-01 15:34:10.451518 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data. 
2025-11-01 15:34:10.451531 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=12.4 ms 2025-11-01 15:34:11.444014 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.32 ms 2025-11-01 15:34:12.446105 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.84 ms 2025-11-01 15:34:12.446201 | orchestrator | 2025-11-01 15:34:12.446217 | orchestrator | --- 192.168.112.130 ping statistics --- 2025-11-01 15:34:12.446230 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-11-01 15:34:12.446242 | orchestrator | rtt min/avg/max/mdev = 1.835/5.519/12.400/4.869 ms 2025-11-01 15:34:12.446254 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-11-01 15:34:16.095944 | orchestrator | 2025-11-01 15:34:16 | INFO  | Live migrating server 9e966b38-70d0-481e-b898-1aa4832c5d08 2025-11-01 15:34:29.659079 | orchestrator | 2025-11-01 15:34:29 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:34:32.068753 | orchestrator | 2025-11-01 15:34:32 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:34:34.446180 | orchestrator | 2025-11-01 15:34:34 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:34:36.788687 | orchestrator | 2025-11-01 15:34:36 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:34:39.141635 | orchestrator | 2025-11-01 15:34:39 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:34:41.478250 | orchestrator | 2025-11-01 15:34:41 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:34:43.815316 | orchestrator | 2025-11-01 15:34:43 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:34:46.122615 | orchestrator | 2025-11-01 15:34:46 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:34:48.437319 | orchestrator | 2025-11-01 15:34:48 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:34:50.750427 | orchestrator | 2025-11-01 15:34:50 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) completed with status ACTIVE 2025-11-01 15:34:50.750560 | orchestrator | 2025-11-01 15:34:50 | INFO  | Live migrating server cab98295-ec13-4c97-8844-f7aa63f4f462 2025-11-01 15:35:03.234827 | orchestrator | 2025-11-01 15:35:03 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:35:05.614133 | orchestrator | 2025-11-01 15:35:05 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:35:07.994901 | orchestrator | 2025-11-01 15:35:07 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:35:10.437759 | orchestrator | 2025-11-01 15:35:10 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:35:12.821016 | orchestrator | 2025-11-01 15:35:12 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:35:15.274102 | orchestrator | 2025-11-01 15:35:15 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) 
is still in progress 2025-11-01 15:35:17.609811 | orchestrator | 2025-11-01 15:35:17 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:35:19.925274 | orchestrator | 2025-11-01 15:35:19 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:35:22.288284 | orchestrator | 2025-11-01 15:35:22 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:35:24.651369 | orchestrator | 2025-11-01 15:35:24 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:35:26.952546 | orchestrator | 2025-11-01 15:35:26 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) completed with status ACTIVE 2025-11-01 15:35:27.351141 | orchestrator | + compute_list 2025-11-01 15:35:27.351233 | orchestrator | + osism manage compute list testbed-node-3 2025-11-01 15:35:31.033680 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:35:31.033789 | orchestrator | | ID | Name | Status | 2025-11-01 15:35:31.033805 | orchestrator | |--------------------------------------+--------+----------| 2025-11-01 15:35:31.033817 | orchestrator | | 9e966b38-70d0-481e-b898-1aa4832c5d08 | test-4 | ACTIVE | 2025-11-01 15:35:31.033828 | orchestrator | | 030d3113-1726-4b4e-a28e-d4fa32489cb1 | test-3 | ACTIVE | 2025-11-01 15:35:31.033839 | orchestrator | | cddbbbb3-b265-406d-ac82-421ab2caa036 | test-2 | ACTIVE | 2025-11-01 15:35:31.033851 | orchestrator | | 15652481-a585-45b8-aeac-1205b9ad1037 | test-1 | ACTIVE | 2025-11-01 15:35:31.033862 | orchestrator | | cab98295-ec13-4c97-8844-f7aa63f4f462 | test | ACTIVE | 2025-11-01 15:35:31.033873 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:35:31.399952 | orchestrator | + osism manage compute list testbed-node-4 2025-11-01 15:35:34.305020 | orchestrator | +------+--------+----------+ 2025-11-01 15:35:34.305113 | orchestrator | | ID | Name | Status | 2025-11-01 15:35:34.305125 | orchestrator | |------+--------+----------| 2025-11-01 15:35:34.305135 | orchestrator | +------+--------+----------+ 2025-11-01 15:35:34.662235 | orchestrator | + osism manage compute list testbed-node-5 2025-11-01 15:35:37.705418 | orchestrator | +------+--------+----------+ 2025-11-01 15:35:37.705574 | orchestrator | | ID | Name | Status | 2025-11-01 15:35:37.705590 | orchestrator | |------+--------+----------| 2025-11-01 15:35:37.705601 | orchestrator | +------+--------+----------+ 2025-11-01 15:35:38.167747 | orchestrator | + server_ping 2025-11-01 15:35:38.168411 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-11-01 15:35:38.169139 | orchestrator | ++ tr -d '\r' 2025-11-01 15:35:41.370402 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:35:41.370482 | orchestrator | + ping -c3 192.168.112.123 2025-11-01 15:35:41.380248 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data. 
2025-11-01 15:35:41.380274 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=6.89 ms 2025-11-01 15:35:42.377003 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=2.01 ms 2025-11-01 15:35:43.378680 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=1.79 ms 2025-11-01 15:35:43.379618 | orchestrator | 2025-11-01 15:35:43.379652 | orchestrator | --- 192.168.112.123 ping statistics --- 2025-11-01 15:35:43.379665 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:35:43.379676 | orchestrator | rtt min/avg/max/mdev = 1.790/3.563/6.893/2.356 ms 2025-11-01 15:35:43.379703 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:35:43.379715 | orchestrator | + ping -c3 192.168.112.142 2025-11-01 15:35:43.392072 | orchestrator | PING 192.168.112.142 (192.168.112.142) 56(84) bytes of data. 2025-11-01 15:35:43.392100 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=1 ttl=63 time=8.28 ms 2025-11-01 15:35:44.387849 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=2 ttl=63 time=2.46 ms 2025-11-01 15:35:45.389745 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=3 ttl=63 time=1.83 ms 2025-11-01 15:35:45.389838 | orchestrator | 2025-11-01 15:35:45.389851 | orchestrator | --- 192.168.112.142 ping statistics --- 2025-11-01 15:35:45.389862 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:35:45.389872 | orchestrator | rtt min/avg/max/mdev = 1.828/4.191/8.283/2.905 ms 2025-11-01 15:35:45.390368 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:35:45.390389 | orchestrator | + ping -c3 192.168.112.200 2025-11-01 15:35:45.403183 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data. 2025-11-01 15:35:45.403224 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=7.90 ms 2025-11-01 15:35:46.399647 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.49 ms 2025-11-01 15:35:47.400667 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=1.64 ms 2025-11-01 15:35:47.401234 | orchestrator | 2025-11-01 15:35:47.401264 | orchestrator | --- 192.168.112.200 ping statistics --- 2025-11-01 15:35:47.401278 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:35:47.401291 | orchestrator | rtt min/avg/max/mdev = 1.642/4.009/7.895/2.769 ms 2025-11-01 15:35:47.402414 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:35:47.402437 | orchestrator | + ping -c3 192.168.112.115 2025-11-01 15:35:47.412347 | orchestrator | PING 192.168.112.115 (192.168.112.115) 56(84) bytes of data. 
2025-11-01 15:35:47.412405 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=1 ttl=63 time=5.67 ms 2025-11-01 15:35:48.411580 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=2 ttl=63 time=2.46 ms 2025-11-01 15:35:49.413482 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=3 ttl=63 time=1.94 ms 2025-11-01 15:35:49.413617 | orchestrator | 2025-11-01 15:35:49.413633 | orchestrator | --- 192.168.112.115 ping statistics --- 2025-11-01 15:35:49.413646 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-11-01 15:35:49.413658 | orchestrator | rtt min/avg/max/mdev = 1.937/3.354/5.671/1.651 ms 2025-11-01 15:35:49.413670 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:35:49.413682 | orchestrator | + ping -c3 192.168.112.130 2025-11-01 15:35:49.424097 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data. 2025-11-01 15:35:49.424142 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=5.81 ms 2025-11-01 15:35:50.421023 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=1.62 ms 2025-11-01 15:35:51.423971 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.85 ms 2025-11-01 15:35:51.424059 | orchestrator | 2025-11-01 15:35:51.424074 | orchestrator | --- 192.168.112.130 ping statistics --- 2025-11-01 15:35:51.424087 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:35:51.424098 | orchestrator | rtt min/avg/max/mdev = 1.619/3.094/5.814/1.925 ms 2025-11-01 15:35:51.424109 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-11-01 15:35:54.931001 | orchestrator | 2025-11-01 15:35:54 | INFO  | Live migrating server 9e966b38-70d0-481e-b898-1aa4832c5d08 2025-11-01 15:36:07.211947 | orchestrator | 2025-11-01 15:36:07 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:36:09.634221 | orchestrator | 2025-11-01 15:36:09 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:36:12.005122 | orchestrator | 2025-11-01 15:36:12 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:36:14.350767 | orchestrator | 2025-11-01 15:36:14 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:36:16.697373 | orchestrator | 2025-11-01 15:36:16 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:36:19.040777 | orchestrator | 2025-11-01 15:36:19 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:36:21.375862 | orchestrator | 2025-11-01 15:36:21 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:36:23.732837 | orchestrator | 2025-11-01 15:36:23 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:36:26.092979 | orchestrator | 2025-11-01 15:36:26 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) completed with status ACTIVE 2025-11-01 15:36:26.093075 | orchestrator | 2025-11-01 15:36:26 | INFO  | Live migrating server 030d3113-1726-4b4e-a28e-d4fa32489cb1 2025-11-01 15:36:39.346759 | orchestrator | 2025-11-01 15:36:39 | INFO  | Live migration 
of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:36:41.715122 | orchestrator | 2025-11-01 15:36:41 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:36:44.122443 | orchestrator | 2025-11-01 15:36:44 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:36:46.541118 | orchestrator | 2025-11-01 15:36:46 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:36:48.914643 | orchestrator | 2025-11-01 15:36:48 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:36:51.233707 | orchestrator | 2025-11-01 15:36:51 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:36:53.552084 | orchestrator | 2025-11-01 15:36:53 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:36:55.856845 | orchestrator | 2025-11-01 15:36:55 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:36:58.239590 | orchestrator | 2025-11-01 15:36:58 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) completed with status ACTIVE 2025-11-01 15:36:58.239685 | orchestrator | 2025-11-01 15:36:58 | INFO  | Live migrating server cddbbbb3-b265-406d-ac82-421ab2caa036 2025-11-01 15:37:09.873412 | orchestrator | 2025-11-01 15:37:09 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:37:12.260669 | orchestrator | 2025-11-01 15:37:12 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:37:14.603998 | orchestrator | 2025-11-01 15:37:14 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:37:17.117005 | orchestrator | 2025-11-01 15:37:17 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:37:19.406288 | orchestrator | 2025-11-01 15:37:19 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:37:21.946126 | orchestrator | 2025-11-01 15:37:21 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:37:24.319949 | orchestrator | 2025-11-01 15:37:24 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:37:26.803584 | orchestrator | 2025-11-01 15:37:26 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:37:29.101881 | orchestrator | 2025-11-01 15:37:29 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:37:31.409509 | orchestrator | 2025-11-01 15:37:31 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) completed with status ACTIVE 2025-11-01 15:37:31.409695 | orchestrator | 2025-11-01 15:37:31 | INFO  | Live migrating server 15652481-a585-45b8-aeac-1205b9ad1037 2025-11-01 15:37:41.844193 | orchestrator | 2025-11-01 15:37:41 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:37:44.215719 | orchestrator | 2025-11-01 15:37:44 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 
15:37:46.596267 | orchestrator | 2025-11-01 15:37:46 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:37:48.974940 | orchestrator | 2025-11-01 15:37:48 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:37:51.349111 | orchestrator | 2025-11-01 15:37:51 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:37:53.712764 | orchestrator | 2025-11-01 15:37:53 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:37:56.013501 | orchestrator | 2025-11-01 15:37:56 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:37:58.321875 | orchestrator | 2025-11-01 15:37:58 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:38:00.647412 | orchestrator | 2025-11-01 15:38:00 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:38:03.018227 | orchestrator | 2025-11-01 15:38:03 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) completed with status ACTIVE 2025-11-01 15:38:03.018330 | orchestrator | 2025-11-01 15:38:03 | INFO  | Live migrating server cab98295-ec13-4c97-8844-f7aa63f4f462 2025-11-01 15:38:14.738763 | orchestrator | 2025-11-01 15:38:14 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:38:17.281904 | orchestrator | 2025-11-01 15:38:17 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:38:19.701443 | orchestrator | 2025-11-01 15:38:19 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:38:22.045293 | orchestrator | 2025-11-01 15:38:22 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:38:24.379395 | orchestrator | 2025-11-01 15:38:24 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:38:26.693645 | orchestrator | 2025-11-01 15:38:26 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:38:28.989971 | orchestrator | 2025-11-01 15:38:28 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:38:31.325831 | orchestrator | 2025-11-01 15:38:31 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:38:33.659750 | orchestrator | 2025-11-01 15:38:33 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:38:35.916883 | orchestrator | 2025-11-01 15:38:35 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:38:38.216445 | orchestrator | 2025-11-01 15:38:38 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) completed with status ACTIVE 2025-11-01 15:38:38.558166 | orchestrator | + compute_list 2025-11-01 15:38:38.558223 | orchestrator | + osism manage compute list testbed-node-3 2025-11-01 15:38:41.520635 | orchestrator | +------+--------+----------+ 2025-11-01 15:38:41.521566 | orchestrator | | ID | Name | Status | 2025-11-01 15:38:41.521641 | orchestrator | |------+--------+----------| 2025-11-01 15:38:41.521654 | orchestrator | 
+------+--------+----------+ 2025-11-01 15:38:41.989488 | orchestrator | + osism manage compute list testbed-node-4 2025-11-01 15:38:45.450270 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:38:45.450372 | orchestrator | | ID | Name | Status | 2025-11-01 15:38:45.450386 | orchestrator | |--------------------------------------+--------+----------| 2025-11-01 15:38:45.450397 | orchestrator | | 9e966b38-70d0-481e-b898-1aa4832c5d08 | test-4 | ACTIVE | 2025-11-01 15:38:45.450408 | orchestrator | | 030d3113-1726-4b4e-a28e-d4fa32489cb1 | test-3 | ACTIVE | 2025-11-01 15:38:45.450419 | orchestrator | | cddbbbb3-b265-406d-ac82-421ab2caa036 | test-2 | ACTIVE | 2025-11-01 15:38:45.450430 | orchestrator | | 15652481-a585-45b8-aeac-1205b9ad1037 | test-1 | ACTIVE | 2025-11-01 15:38:45.450441 | orchestrator | | cab98295-ec13-4c97-8844-f7aa63f4f462 | test | ACTIVE | 2025-11-01 15:38:45.450452 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:38:45.786624 | orchestrator | + osism manage compute list testbed-node-5 2025-11-01 15:38:48.662925 | orchestrator | +------+--------+----------+ 2025-11-01 15:38:48.663013 | orchestrator | | ID | Name | Status | 2025-11-01 15:38:48.663021 | orchestrator | |------+--------+----------| 2025-11-01 15:38:48.663028 | orchestrator | +------+--------+----------+ 2025-11-01 15:38:49.027643 | orchestrator | + server_ping 2025-11-01 15:38:49.028756 | orchestrator | ++ tr -d '\r' 2025-11-01 15:38:49.028774 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-11-01 15:38:52.225811 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:38:52.225910 | orchestrator | + ping -c3 192.168.112.123 2025-11-01 15:38:52.236575 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data. 2025-11-01 15:38:52.236602 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=7.97 ms 2025-11-01 15:38:53.233005 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=2.80 ms 2025-11-01 15:38:54.233965 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=1.55 ms 2025-11-01 15:38:54.234129 | orchestrator | 2025-11-01 15:38:54.234144 | orchestrator | --- 192.168.112.123 ping statistics --- 2025-11-01 15:38:54.234155 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:38:54.234165 | orchestrator | rtt min/avg/max/mdev = 1.545/4.104/7.968/2.779 ms 2025-11-01 15:38:54.234386 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:38:54.234909 | orchestrator | + ping -c3 192.168.112.142 2025-11-01 15:38:54.245786 | orchestrator | PING 192.168.112.142 (192.168.112.142) 56(84) bytes of data. 
2025-11-01 15:38:54.245808 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=1 ttl=63 time=8.33 ms 2025-11-01 15:38:55.242271 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=2 ttl=63 time=3.02 ms 2025-11-01 15:38:56.242380 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=3 ttl=63 time=2.17 ms 2025-11-01 15:38:56.242483 | orchestrator | 2025-11-01 15:38:56.242499 | orchestrator | --- 192.168.112.142 ping statistics --- 2025-11-01 15:38:56.242513 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-11-01 15:38:56.242524 | orchestrator | rtt min/avg/max/mdev = 2.172/4.506/8.330/2.725 ms 2025-11-01 15:38:56.243236 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:38:56.243271 | orchestrator | + ping -c3 192.168.112.200 2025-11-01 15:38:56.255942 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data. 2025-11-01 15:38:56.255993 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=7.77 ms 2025-11-01 15:38:57.251895 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.50 ms 2025-11-01 15:38:58.253993 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=1.90 ms 2025-11-01 15:38:58.254133 | orchestrator | 2025-11-01 15:38:58.254148 | orchestrator | --- 192.168.112.200 ping statistics --- 2025-11-01 15:38:58.254160 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:38:58.254172 | orchestrator | rtt min/avg/max/mdev = 1.898/4.056/7.772/2.639 ms 2025-11-01 15:38:58.254184 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:38:58.254195 | orchestrator | + ping -c3 192.168.112.115 2025-11-01 15:38:58.264607 | orchestrator | PING 192.168.112.115 (192.168.112.115) 56(84) bytes of data. 2025-11-01 15:38:58.264632 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=1 ttl=63 time=6.07 ms 2025-11-01 15:38:59.263042 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=2 ttl=63 time=2.66 ms 2025-11-01 15:39:00.264604 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=3 ttl=63 time=1.52 ms 2025-11-01 15:39:00.264692 | orchestrator | 2025-11-01 15:39:00.264706 | orchestrator | --- 192.168.112.115 ping statistics --- 2025-11-01 15:39:00.264739 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:39:00.264751 | orchestrator | rtt min/avg/max/mdev = 1.523/3.417/6.073/1.933 ms 2025-11-01 15:39:00.264763 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:39:00.264775 | orchestrator | + ping -c3 192.168.112.130 2025-11-01 15:39:00.280026 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data. 
2025-11-01 15:39:00.280054 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=8.91 ms 2025-11-01 15:39:01.275181 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.55 ms 2025-11-01 15:39:02.276778 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.75 ms 2025-11-01 15:39:02.276872 | orchestrator | 2025-11-01 15:39:02.276887 | orchestrator | --- 192.168.112.130 ping statistics --- 2025-11-01 15:39:02.276898 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:39:02.276910 | orchestrator | rtt min/avg/max/mdev = 1.747/4.402/8.910/3.204 ms 2025-11-01 15:39:02.276921 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2025-11-01 15:39:05.865427 | orchestrator | 2025-11-01 15:39:05 | INFO  | Live migrating server 9e966b38-70d0-481e-b898-1aa4832c5d08 2025-11-01 15:39:15.741463 | orchestrator | 2025-11-01 15:39:15 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:39:18.139449 | orchestrator | 2025-11-01 15:39:18 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:39:20.508400 | orchestrator | 2025-11-01 15:39:20 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:39:22.885366 | orchestrator | 2025-11-01 15:39:22 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:39:25.238338 | orchestrator | 2025-11-01 15:39:25 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:39:27.569848 | orchestrator | 2025-11-01 15:39:27 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:39:29.922611 | orchestrator | 2025-11-01 15:39:29 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:39:32.227198 | orchestrator | 2025-11-01 15:39:32 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:39:34.579498 | orchestrator | 2025-11-01 15:39:34 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) is still in progress 2025-11-01 15:39:36.982355 | orchestrator | 2025-11-01 15:39:36 | INFO  | Live migration of 9e966b38-70d0-481e-b898-1aa4832c5d08 (test-4) completed with status ACTIVE 2025-11-01 15:39:36.982459 | orchestrator | 2025-11-01 15:39:36 | INFO  | Live migrating server 030d3113-1726-4b4e-a28e-d4fa32489cb1 2025-11-01 15:39:47.419938 | orchestrator | 2025-11-01 15:39:47 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:39:49.816021 | orchestrator | 2025-11-01 15:39:49 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:39:52.205140 | orchestrator | 2025-11-01 15:39:52 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:39:54.600787 | orchestrator | 2025-11-01 15:39:54 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:39:56.985031 | orchestrator | 2025-11-01 15:39:56 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:39:59.400066 | orchestrator | 2025-11-01 15:39:59 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 
(test-3) is still in progress 2025-11-01 15:40:01.715340 | orchestrator | 2025-11-01 15:40:01 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:40:04.037944 | orchestrator | 2025-11-01 15:40:04 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) is still in progress 2025-11-01 15:40:06.445728 | orchestrator | 2025-11-01 15:40:06 | INFO  | Live migration of 030d3113-1726-4b4e-a28e-d4fa32489cb1 (test-3) completed with status ACTIVE 2025-11-01 15:40:06.445810 | orchestrator | 2025-11-01 15:40:06 | INFO  | Live migrating server cddbbbb3-b265-406d-ac82-421ab2caa036 2025-11-01 15:40:16.857001 | orchestrator | 2025-11-01 15:40:16 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:40:19.266533 | orchestrator | 2025-11-01 15:40:19 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:40:21.680898 | orchestrator | 2025-11-01 15:40:21 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:40:23.994971 | orchestrator | 2025-11-01 15:40:23 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:40:26.374886 | orchestrator | 2025-11-01 15:40:26 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:40:28.801398 | orchestrator | 2025-11-01 15:40:28 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:40:31.134086 | orchestrator | 2025-11-01 15:40:31 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:40:33.446090 | orchestrator | 2025-11-01 15:40:33 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:40:35.725420 | orchestrator | 2025-11-01 15:40:35 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) is still in progress 2025-11-01 15:40:38.021108 | orchestrator | 2025-11-01 15:40:38 | INFO  | Live migration of cddbbbb3-b265-406d-ac82-421ab2caa036 (test-2) completed with status ACTIVE 2025-11-01 15:40:38.021215 | orchestrator | 2025-11-01 15:40:38 | INFO  | Live migrating server 15652481-a585-45b8-aeac-1205b9ad1037 2025-11-01 15:40:48.426575 | orchestrator | 2025-11-01 15:40:48 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:40:50.840238 | orchestrator | 2025-11-01 15:40:50 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:40:53.250527 | orchestrator | 2025-11-01 15:40:53 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:40:55.577317 | orchestrator | 2025-11-01 15:40:55 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:40:57.894810 | orchestrator | 2025-11-01 15:40:57 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:41:00.220094 | orchestrator | 2025-11-01 15:41:00 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:41:02.593132 | orchestrator | 2025-11-01 15:41:02 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:41:04.996899 | orchestrator | 2025-11-01 
15:41:04 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) is still in progress 2025-11-01 15:41:07.392810 | orchestrator | 2025-11-01 15:41:07 | INFO  | Live migration of 15652481-a585-45b8-aeac-1205b9ad1037 (test-1) completed with status ACTIVE 2025-11-01 15:41:07.392908 | orchestrator | 2025-11-01 15:41:07 | INFO  | Live migrating server cab98295-ec13-4c97-8844-f7aa63f4f462 2025-11-01 15:41:18.197039 | orchestrator | 2025-11-01 15:41:18 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:41:20.555085 | orchestrator | 2025-11-01 15:41:20 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:41:22.985288 | orchestrator | 2025-11-01 15:41:22 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:41:25.351933 | orchestrator | 2025-11-01 15:41:25 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:41:27.746768 | orchestrator | 2025-11-01 15:41:27 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:41:30.143089 | orchestrator | 2025-11-01 15:41:30 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:41:32.537323 | orchestrator | 2025-11-01 15:41:32 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:41:34.831176 | orchestrator | 2025-11-01 15:41:34 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:41:37.197029 | orchestrator | 2025-11-01 15:41:37 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:41:39.587893 | orchestrator | 2025-11-01 15:41:39 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) is still in progress 2025-11-01 15:41:41.935914 | orchestrator | 2025-11-01 15:41:41 | INFO  | Live migration of cab98295-ec13-4c97-8844-f7aa63f4f462 (test) completed with status ACTIVE 2025-11-01 15:41:42.285780 | orchestrator | + compute_list 2025-11-01 15:41:42.285857 | orchestrator | + osism manage compute list testbed-node-3 2025-11-01 15:41:45.138191 | orchestrator | +------+--------+----------+ 2025-11-01 15:41:45.138884 | orchestrator | | ID | Name | Status | 2025-11-01 15:41:45.138910 | orchestrator | |------+--------+----------| 2025-11-01 15:41:45.138922 | orchestrator | +------+--------+----------+ 2025-11-01 15:41:45.685715 | orchestrator | + osism manage compute list testbed-node-4 2025-11-01 15:41:48.612194 | orchestrator | +------+--------+----------+ 2025-11-01 15:41:48.612291 | orchestrator | | ID | Name | Status | 2025-11-01 15:41:48.612300 | orchestrator | |------+--------+----------| 2025-11-01 15:41:48.612306 | orchestrator | +------+--------+----------+ 2025-11-01 15:41:49.098611 | orchestrator | + osism manage compute list testbed-node-5 2025-11-01 15:41:52.557255 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:41:52.557363 | orchestrator | | ID | Name | Status | 2025-11-01 15:41:52.557379 | orchestrator | |--------------------------------------+--------+----------| 2025-11-01 15:41:52.557391 | orchestrator | | 9e966b38-70d0-481e-b898-1aa4832c5d08 | test-4 | ACTIVE | 2025-11-01 15:41:52.557403 | orchestrator | | 030d3113-1726-4b4e-a28e-d4fa32489cb1 | test-3 | ACTIVE | 2025-11-01 15:41:52.557414 | 
orchestrator | | cddbbbb3-b265-406d-ac82-421ab2caa036 | test-2 | ACTIVE | 2025-11-01 15:41:52.557425 | orchestrator | | 15652481-a585-45b8-aeac-1205b9ad1037 | test-1 | ACTIVE | 2025-11-01 15:41:52.557436 | orchestrator | | cab98295-ec13-4c97-8844-f7aa63f4f462 | test | ACTIVE | 2025-11-01 15:41:52.557447 | orchestrator | +--------------------------------------+--------+----------+ 2025-11-01 15:41:52.891809 | orchestrator | + server_ping 2025-11-01 15:41:52.892899 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-11-01 15:41:52.893465 | orchestrator | ++ tr -d '\r' 2025-11-01 15:41:55.861252 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:41:55.861357 | orchestrator | + ping -c3 192.168.112.123 2025-11-01 15:41:55.875760 | orchestrator | PING 192.168.112.123 (192.168.112.123) 56(84) bytes of data. 2025-11-01 15:41:55.875821 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=1 ttl=63 time=9.64 ms 2025-11-01 15:41:56.870734 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=2 ttl=63 time=2.53 ms 2025-11-01 15:41:57.872925 | orchestrator | 64 bytes from 192.168.112.123: icmp_seq=3 ttl=63 time=2.01 ms 2025-11-01 15:41:57.873705 | orchestrator | 2025-11-01 15:41:57.873739 | orchestrator | --- 192.168.112.123 ping statistics --- 2025-11-01 15:41:57.873753 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-11-01 15:41:57.873764 | orchestrator | rtt min/avg/max/mdev = 2.012/4.724/9.635/3.478 ms 2025-11-01 15:41:57.873777 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:41:57.873801 | orchestrator | + ping -c3 192.168.112.142 2025-11-01 15:41:57.887687 | orchestrator | PING 192.168.112.142 (192.168.112.142) 56(84) bytes of data. 2025-11-01 15:41:57.887727 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=1 ttl=63 time=10.3 ms 2025-11-01 15:41:58.882826 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=2 ttl=63 time=3.38 ms 2025-11-01 15:41:59.883995 | orchestrator | 64 bytes from 192.168.112.142: icmp_seq=3 ttl=63 time=1.75 ms 2025-11-01 15:41:59.884088 | orchestrator | 2025-11-01 15:41:59.884103 | orchestrator | --- 192.168.112.142 ping statistics --- 2025-11-01 15:41:59.884116 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-11-01 15:41:59.884127 | orchestrator | rtt min/avg/max/mdev = 1.749/5.148/10.313/3.712 ms 2025-11-01 15:41:59.884182 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:41:59.884196 | orchestrator | + ping -c3 192.168.112.200 2025-11-01 15:41:59.894219 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data. 
2025-11-01 15:41:59.894244 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=5.38 ms 2025-11-01 15:42:00.893031 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.24 ms 2025-11-01 15:42:01.895275 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=1.98 ms 2025-11-01 15:42:01.895367 | orchestrator | 2025-11-01 15:42:01.895381 | orchestrator | --- 192.168.112.200 ping statistics --- 2025-11-01 15:42:01.895394 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:42:01.895405 | orchestrator | rtt min/avg/max/mdev = 1.983/3.203/5.383/1.544 ms 2025-11-01 15:42:01.895486 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:42:01.895502 | orchestrator | + ping -c3 192.168.112.115 2025-11-01 15:42:01.909330 | orchestrator | PING 192.168.112.115 (192.168.112.115) 56(84) bytes of data. 2025-11-01 15:42:01.909391 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=1 ttl=63 time=8.53 ms 2025-11-01 15:42:02.904920 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=2 ttl=63 time=2.30 ms 2025-11-01 15:42:03.905618 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=3 ttl=63 time=1.67 ms 2025-11-01 15:42:03.905711 | orchestrator | 2025-11-01 15:42:03.905725 | orchestrator | --- 192.168.112.115 ping statistics --- 2025-11-01 15:42:03.905738 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:42:03.905749 | orchestrator | rtt min/avg/max/mdev = 1.667/4.164/8.531/3.098 ms 2025-11-01 15:42:03.906294 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 15:42:03.906320 | orchestrator | + ping -c3 192.168.112.130 2025-11-01 15:42:03.921446 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data. 
2025-11-01 15:42:03.921470 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=9.73 ms 2025-11-01 15:42:04.915956 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.87 ms 2025-11-01 15:42:05.915934 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.82 ms 2025-11-01 15:42:05.916043 | orchestrator | 2025-11-01 15:42:05.916061 | orchestrator | --- 192.168.112.130 ping statistics --- 2025-11-01 15:42:05.916075 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 15:42:05.916086 | orchestrator | rtt min/avg/max/mdev = 1.822/4.808/9.733/3.508 ms 2025-11-01 15:42:06.006639 | orchestrator | ok: Runtime: 0:21:13.500901 2025-11-01 15:42:06.058539 | 2025-11-01 15:42:06.058717 | TASK [Run tempest] 2025-11-01 15:42:06.595183 | orchestrator | skipping: Conditional result was False 2025-11-01 15:42:06.613767 | 2025-11-01 15:42:06.613940 | TASK [Check prometheus alert status] 2025-11-01 15:42:07.165948 | orchestrator | skipping: Conditional result was False 2025-11-01 15:42:07.168847 | 2025-11-01 15:42:07.168996 | PLAY RECAP 2025-11-01 15:42:07.169107 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-11-01 15:42:07.169156 | 2025-11-01 15:42:07.397481 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-11-01 15:42:07.399685 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-11-01 15:42:08.141061 | 2025-11-01 15:42:08.141220 | PLAY [Post output play] 2025-11-01 15:42:08.157913 | 2025-11-01 15:42:08.158050 | LOOP [stage-output : Register sources] 2025-11-01 15:42:08.219985 | 2025-11-01 15:42:08.220194 | TASK [stage-output : Check sudo] 2025-11-01 15:42:09.043512 | orchestrator | sudo: a password is required 2025-11-01 15:42:09.260423 | orchestrator | ok: Runtime: 0:00:00.012877 2025-11-01 15:42:09.275743 | 2025-11-01 15:42:09.275903 | LOOP [stage-output : Set source and destination for files and folders] 2025-11-01 15:42:09.318257 | 2025-11-01 15:42:09.318581 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-11-01 15:42:09.398206 | orchestrator | ok 2025-11-01 15:42:09.407376 | 2025-11-01 15:42:09.407515 | LOOP [stage-output : Ensure target folders exist] 2025-11-01 15:42:09.837345 | orchestrator | ok: "docs" 2025-11-01 15:42:09.837675 | 2025-11-01 15:42:10.066034 | orchestrator | ok: "artifacts" 2025-11-01 15:42:10.304695 | orchestrator | ok: "logs" 2025-11-01 15:42:10.328305 | 2025-11-01 15:42:10.328529 | LOOP [stage-output : Copy files and folders to staging folder] 2025-11-01 15:42:10.367856 | 2025-11-01 15:42:10.368120 | TASK [stage-output : Make all log files readable] 2025-11-01 15:42:10.637724 | orchestrator | ok 2025-11-01 15:42:10.646431 | 2025-11-01 15:42:10.646570 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-11-01 15:42:10.671262 | orchestrator | skipping: Conditional result was False 2025-11-01 15:42:10.686635 | 2025-11-01 15:42:10.686888 | TASK [stage-output : Discover log files for compression] 2025-11-01 15:42:10.702194 | orchestrator | skipping: Conditional result was False 2025-11-01 15:42:10.714725 | 2025-11-01 15:42:10.714947 | LOOP [stage-output : Archive everything from logs] 2025-11-01 15:42:10.754539 | 2025-11-01 15:42:10.754685 | PLAY [Post cleanup play] 2025-11-01 15:42:10.763020 | 2025-11-01 15:42:10.763118 | TASK [Set cloud fact (Zuul deployment)] 2025-11-01 15:42:10.821210 | orchestrator | ok 2025-11-01 
15:42:10.831941 | 2025-11-01 15:42:10.832054 | TASK [Set cloud fact (local deployment)] 2025-11-01 15:42:10.866119 | orchestrator | skipping: Conditional result was False 2025-11-01 15:42:10.882362 | 2025-11-01 15:42:10.882504 | TASK [Clean the cloud environment] 2025-11-01 15:42:13.045375 | orchestrator | 2025-11-01 15:42:13 - clean up servers 2025-11-01 15:42:13.797957 | orchestrator | 2025-11-01 15:42:13 - testbed-manager 2025-11-01 15:42:13.882386 | orchestrator | 2025-11-01 15:42:13 - testbed-node-0 2025-11-01 15:42:13.970419 | orchestrator | 2025-11-01 15:42:13 - testbed-node-5 2025-11-01 15:42:14.057839 | orchestrator | 2025-11-01 15:42:14 - testbed-node-3 2025-11-01 15:42:14.157307 | orchestrator | 2025-11-01 15:42:14 - testbed-node-4 2025-11-01 15:42:14.247285 | orchestrator | 2025-11-01 15:42:14 - testbed-node-2 2025-11-01 15:42:14.340832 | orchestrator | 2025-11-01 15:42:14 - testbed-node-1 2025-11-01 15:42:14.429196 | orchestrator | 2025-11-01 15:42:14 - clean up keypairs 2025-11-01 15:42:14.451287 | orchestrator | 2025-11-01 15:42:14 - testbed 2025-11-01 15:42:14.477124 | orchestrator | 2025-11-01 15:42:14 - wait for servers to be gone 2025-11-01 15:42:23.160804 | orchestrator | 2025-11-01 15:42:23 - clean up ports 2025-11-01 15:42:23.334654 | orchestrator | 2025-11-01 15:42:23 - 0d19cd6e-63eb-42ba-bc10-d19c33d5b9f1 2025-11-01 15:42:23.748441 | orchestrator | 2025-11-01 15:42:23 - 1143efdb-da79-4a98-a54e-0e91af19eef6 2025-11-01 15:42:23.997785 | orchestrator | 2025-11-01 15:42:23 - 570e4d5c-179c-4fb7-8bcb-f102d92eb0a4 2025-11-01 15:42:24.217445 | orchestrator | 2025-11-01 15:42:24 - 78605ead-973e-42b3-b03c-729a6f2ecc72 2025-11-01 15:42:24.458196 | orchestrator | 2025-11-01 15:42:24 - bb493ff1-d387-4b78-89de-f49fcf990c43 2025-11-01 15:42:24.668744 | orchestrator | 2025-11-01 15:42:24 - d5cbe547-131e-471c-bad8-f0dff1e0e2f6 2025-11-01 15:42:24.871155 | orchestrator | 2025-11-01 15:42:24 - ea6e700a-56b8-4897-9d57-41c809951b45 2025-11-01 15:42:25.086495 | orchestrator | 2025-11-01 15:42:25 - clean up volumes 2025-11-01 15:42:25.193373 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-4-node-base 2025-11-01 15:42:25.230616 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-2-node-base 2025-11-01 15:42:25.268265 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-1-node-base 2025-11-01 15:42:25.306804 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-3-node-base 2025-11-01 15:42:25.346317 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-0-node-base 2025-11-01 15:42:25.386776 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-5-node-base 2025-11-01 15:42:25.425426 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-manager-base 2025-11-01 15:42:25.466182 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-8-node-5 2025-11-01 15:42:25.503085 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-2-node-5 2025-11-01 15:42:25.543058 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-7-node-4 2025-11-01 15:42:25.582922 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-5-node-5 2025-11-01 15:42:25.622112 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-6-node-3 2025-11-01 15:42:25.658867 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-3-node-3 2025-11-01 15:42:25.696595 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-0-node-3 2025-11-01 15:42:25.737455 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-1-node-4 2025-11-01 15:42:25.776855 | orchestrator | 2025-11-01 15:42:25 - testbed-volume-4-node-4 2025-11-01 15:42:25.818255 | 
orchestrator | 2025-11-01 15:42:25 - disconnect routers 2025-11-01 15:42:25.921073 | orchestrator | 2025-11-01 15:42:25 - testbed 2025-11-01 15:42:27.007696 | orchestrator | 2025-11-01 15:42:27 - clean up subnets 2025-11-01 15:42:27.064137 | orchestrator | 2025-11-01 15:42:27 - subnet-testbed-management 2025-11-01 15:42:27.228849 | orchestrator | 2025-11-01 15:42:27 - clean up networks 2025-11-01 15:42:27.884590 | orchestrator | 2025-11-01 15:42:27 - net-testbed-management 2025-11-01 15:42:28.166658 | orchestrator | 2025-11-01 15:42:28 - clean up security groups 2025-11-01 15:42:28.212741 | orchestrator | 2025-11-01 15:42:28 - testbed-node 2025-11-01 15:42:28.331097 | orchestrator | 2025-11-01 15:42:28 - testbed-management 2025-11-01 15:42:28.442332 | orchestrator | 2025-11-01 15:42:28 - clean up floating ips 2025-11-01 15:42:28.477243 | orchestrator | 2025-11-01 15:42:28 - 81.163.192.208 2025-11-01 15:42:28.814294 | orchestrator | 2025-11-01 15:42:28 - clean up routers 2025-11-01 15:42:28.875815 | orchestrator | 2025-11-01 15:42:28 - testbed 2025-11-01 15:42:29.939447 | orchestrator | ok: Runtime: 0:00:18.580573 2025-11-01 15:42:29.943787 | 2025-11-01 15:42:29.943956 | PLAY RECAP 2025-11-01 15:42:29.944085 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-11-01 15:42:29.944145 | 2025-11-01 15:42:30.070260 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-11-01 15:42:30.072673 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-11-01 15:42:30.832868 | 2025-11-01 15:42:30.833069 | PLAY [Cleanup play] 2025-11-01 15:42:30.849177 | 2025-11-01 15:42:30.849329 | TASK [Set cloud fact (Zuul deployment)] 2025-11-01 15:42:30.904935 | orchestrator | ok 2025-11-01 15:42:30.917565 | 2025-11-01 15:42:30.917810 | TASK [Set cloud fact (local deployment)] 2025-11-01 15:42:30.942614 | orchestrator | skipping: Conditional result was False 2025-11-01 15:42:30.958194 | 2025-11-01 15:42:30.958390 | TASK [Clean the cloud environment] 2025-11-01 15:42:32.083963 | orchestrator | 2025-11-01 15:42:32 - clean up servers 2025-11-01 15:42:32.565771 | orchestrator | 2025-11-01 15:42:32 - clean up keypairs 2025-11-01 15:42:32.582337 | orchestrator | 2025-11-01 15:42:32 - wait for servers to be gone 2025-11-01 15:42:32.622630 | orchestrator | 2025-11-01 15:42:32 - clean up ports 2025-11-01 15:42:32.699524 | orchestrator | 2025-11-01 15:42:32 - clean up volumes 2025-11-01 15:42:32.759946 | orchestrator | 2025-11-01 15:42:32 - disconnect routers 2025-11-01 15:42:32.806136 | orchestrator | 2025-11-01 15:42:32 - clean up subnets 2025-11-01 15:42:32.828482 | orchestrator | 2025-11-01 15:42:32 - clean up networks 2025-11-01 15:42:33.024815 | orchestrator | 2025-11-01 15:42:33 - clean up security groups 2025-11-01 15:42:33.076350 | orchestrator | 2025-11-01 15:42:33 - clean up floating ips 2025-11-01 15:42:33.100134 | orchestrator | 2025-11-01 15:42:33 - clean up routers 2025-11-01 15:42:33.503463 | orchestrator | ok: Runtime: 0:00:01.401977 2025-11-01 15:42:33.505107 | 2025-11-01 15:42:33.505188 | PLAY RECAP 2025-11-01 15:42:33.505239 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-11-01 15:42:33.505263 | 2025-11-01 15:42:33.623774 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-11-01 15:42:33.624764 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 
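[Editor's note] The "Clean the cloud environment" task above runs the testbed's own cleanup script, whose internals are not shown in this log; only its progress messages are. The teardown order those messages reveal (servers, keypairs, wait, ports, volumes, router detach, subnets, networks, security groups, floating IPs, routers) can be reproduced with plain openstack CLI calls. A hedged sketch, assuming OS_CLOUD points at the outer cloud and using the resource names printed above:

#!/usr/bin/env bash
# Sketch only - equivalent of the teardown order seen in the cleanup output,
# not the testbed's actual cleanup implementation.
export OS_CLOUD=testbed            # assumption: name of the outer cloud profile

echo "clean up servers"
for s in $(openstack server list -f value -c Name); do
    openstack server delete "$s"
done

echo "clean up keypairs"
openstack keypair delete testbed || true

echo "wait for servers to be gone"
while [ -n "$(openstack server list -f value -c ID)" ]; do sleep 5; done

echo "clean up ports"
for p in $(openstack port list -f value -c ID); do
    openstack port delete "$p"
done

echo "clean up volumes"
for v in $(openstack volume list -f value -c Name | grep '^testbed-volume-'); do
    openstack volume delete "$v"
done

echo "disconnect routers, clean up subnets and networks"
openstack router remove subnet testbed subnet-testbed-management
openstack subnet delete subnet-testbed-management
openstack network delete net-testbed-management

echo "clean up security groups"
openstack security group delete testbed-node
openstack security group delete testbed-management

echo "clean up floating ips and routers"
for ip in $(openstack floating ip list -f value -c 'Floating IP Address'); do
    openstack floating ip delete "$ip"
done
openstack router delete testbed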
2025-11-01 15:42:34.385726 | 2025-11-01 15:42:34.385876 | PLAY [Base post-fetch] 2025-11-01 15:42:34.400906 | 2025-11-01 15:42:34.401029 | TASK [fetch-output : Set log path for multiple nodes] 2025-11-01 15:42:34.457366 | orchestrator | skipping: Conditional result was False 2025-11-01 15:42:34.466325 | 2025-11-01 15:42:34.466464 | TASK [fetch-output : Set log path for single node] 2025-11-01 15:42:34.510412 | orchestrator | ok 2025-11-01 15:42:34.517885 | 2025-11-01 15:42:34.518003 | LOOP [fetch-output : Ensure local output dirs] 2025-11-01 15:42:35.003274 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/25805c129ff442398dcdfabb9a23ba03/work/logs" 2025-11-01 15:42:35.279026 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/25805c129ff442398dcdfabb9a23ba03/work/artifacts" 2025-11-01 15:42:35.533657 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/25805c129ff442398dcdfabb9a23ba03/work/docs" 2025-11-01 15:42:35.545398 | 2025-11-01 15:42:35.545526 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-11-01 15:42:36.468015 | orchestrator | changed: .d..t...... ./ 2025-11-01 15:42:36.468418 | orchestrator | changed: All items complete 2025-11-01 15:42:36.468487 | 2025-11-01 15:42:37.175389 | orchestrator | changed: .d..t...... ./ 2025-11-01 15:42:37.904733 | orchestrator | changed: .d..t...... ./ 2025-11-01 15:42:37.927754 | 2025-11-01 15:42:37.927906 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-11-01 15:42:37.961111 | orchestrator | skipping: Conditional result was False 2025-11-01 15:42:37.963620 | orchestrator | skipping: Conditional result was False 2025-11-01 15:42:37.979975 | 2025-11-01 15:42:37.980055 | PLAY RECAP 2025-11-01 15:42:37.980107 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-11-01 15:42:37.980134 | 2025-11-01 15:42:38.094240 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-11-01 15:42:38.096606 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-11-01 15:42:38.836215 | 2025-11-01 15:42:38.836409 | PLAY [Base post] 2025-11-01 15:42:38.850471 | 2025-11-01 15:42:38.850595 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-11-01 15:42:39.792268 | orchestrator | changed 2025-11-01 15:42:39.801556 | 2025-11-01 15:42:39.801667 | PLAY RECAP 2025-11-01 15:42:39.801748 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-11-01 15:42:39.801830 | 2025-11-01 15:42:39.917095 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-11-01 15:42:39.919155 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-11-01 15:42:40.689829 | 2025-11-01 15:42:40.689989 | PLAY [Base post-logs] 2025-11-01 15:42:40.700422 | 2025-11-01 15:42:40.700547 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-11-01 15:42:41.151201 | localhost | changed 2025-11-01 15:42:41.161091 | 2025-11-01 15:42:41.161225 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-11-01 15:42:41.198158 | localhost | ok 2025-11-01 15:42:41.204919 | 2025-11-01 15:42:41.205090 | TASK [Set zuul-log-path fact] 2025-11-01 15:42:41.224637 | localhost | ok 2025-11-01 15:42:41.241144 | 2025-11-01 15:42:41.241349 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-11-01 15:42:41.279114 | localhost | ok 2025-11-01 15:42:41.286165 
| 2025-11-01 15:42:41.286342 | TASK [upload-logs : Create log directories] 2025-11-01 15:42:41.796399 | localhost | changed 2025-11-01 15:42:41.801905 | 2025-11-01 15:42:41.802057 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-11-01 15:42:42.283115 | localhost -> localhost | ok: Runtime: 0:00:00.007148 2025-11-01 15:42:42.287378 | 2025-11-01 15:42:42.287498 | TASK [upload-logs : Upload logs to log server] 2025-11-01 15:42:42.833086 | localhost | Output suppressed because no_log was given 2025-11-01 15:42:42.836585 | 2025-11-01 15:42:42.836747 | LOOP [upload-logs : Compress console log and json output] 2025-11-01 15:42:42.893668 | localhost | skipping: Conditional result was False 2025-11-01 15:42:42.898864 | localhost | skipping: Conditional result was False 2025-11-01 15:42:42.911080 | 2025-11-01 15:42:42.911292 | LOOP [upload-logs : Upload compressed console log and json output] 2025-11-01 15:42:42.956919 | localhost | skipping: Conditional result was False 2025-11-01 15:42:42.957516 | 2025-11-01 15:42:42.961289 | localhost | skipping: Conditional result was False 2025-11-01 15:42:42.974693 | 2025-11-01 15:42:42.974967 | LOOP [upload-logs : Upload console log and json output]
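[Editor's note] For reference, the deploy-phase trace earlier in this section alternates "osism manage compute migrate" with two shell helpers, compute_list and server_ping, whose bodies can be read directly off the xtrace ('+') lines. A minimal reconstruction using only the commands visible in the trace; the testbed's actual script may differ in detail:

#!/usr/bin/env bash
# Reconstructed from the '+' xtrace lines in the deploy output above.
set -ex

compute_list() {
    # Show which instances each compute node currently hosts.
    osism manage compute list testbed-node-3
    osism manage compute list testbed-node-4
    osism manage compute list testbed-node-5
}

server_ping() {
    # Verify every ACTIVE floating IP still answers after migration;
    # tr strips carriage returns from the CLI output.
    for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done
}

# One round as seen in the trace: evacuate testbed-node-4 onto testbed-node-3,
# then re-check instance placement and network reachability.
osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
compute_list
server_ping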